snip
>io_load:
>Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
>2.5.68           3   492   15.9  167.1   19.7   6.23
>2.5.68-mm1       4   128   59.4   47.6   19.4   1.62
>2.5.68-mm2       4   131   58.8   47.0   18.9   1.64
>2.5.68-mm3       4   271   28.4   89.2   17.9   3.39
>2.5.69           4   343   22.7  120.5   19.8   4.29
>2.5.69-mm3       4   319   24.5  105.3   18.1   4.04
>
snip
>
>dbench_load:
>Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
>2.5.68           3   412   18.4    5.3   47.6   5.22
>2.5.68-mm1       4   361   21.1    5.5   54.0   4.57
>2.5.68-mm2       4   345   22.0    4.8   49.3   4.31
>2.5.68-mm3       4   721   10.5    6.8   33.6   9.01
>2.5.69           4   374   20.3    5.0   48.1   4.67
>2.5.69-mm3       4   653   11.6    6.2   34.0   8.27
>
>Very similar to 2.5.68-mm3
>
Thanks again Con. These two benchmarks in particular have regressed quite a
bit since the 68-mm2 days... I hope it is just the larger request queue size
introduced by the rq-dyn patch in -mm. If you get some time, could you
possibly change BLKDEV_MAX_RQ in include/linux/blkdev.h from 1024 to 128 and
rerun these two loads with that setting?
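In case it saves you a minute, it should be a one-line edit; something like
the following (sketch only, the surrounding context and exact line will
depend on your tree):

--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@
-#define BLKDEV_MAX_RQ	1024
+#define BLKDEV_MAX_RQ	128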
Cheers,
Nick