Can we first make sure that the other factors don't play a role in this
benchmark? I have a number (14+) of Dell servers here, and I know for a
fact that most of their RAID systems are heavily borked in the
performance department.
All kernels up to 2.4.1x performed horribly, and all kernels after 2.4.16
or so perform horribly again! Somewhere in between, some magic seemed to
happen in the block layer / elevator code / etc. that caused performance
to increase by up to 100% on the Dell PERC adapters (starting with the
first release of the AA VM). However, after a few small releases, the
performance went back down to the same old horrible level.
So it might well be (very likely, actually) that the tested Red Hat 2.4.14
is a performance 'sweet spot' kernel, while kernels < 2.4.13 and >
2.4.15 or 16 are definitely not.
The RAID performance is a whole issue in itself. Part of it seems to be
block I/O / elevator / driver related, and part seems to be adapter
firmware related (and Adaptec refusing to release their drivers).
However, since both 2.4.17 and 2.4.7 show the same horrible RAID
performance, I do not think the VM is responsible for that part ;-)
A good test would be to configure those disks on a plain AIC7xxx
adapter and software-RAID them together. The performance of that is
'equal' across those different kernels, and much, much higher than the
hardware RAID. Benchmarking on top of that would give much more useful
results for comparing VMs.
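For what it's worth, something along these lines is the kind of raw
sequential-throughput check I have in mind for comparing kernels on the
same array. It is only a rough sketch: the mount point, file name and
sizes are placeholders (I'm assuming the md array is mounted somewhere
like /mnt/mdtest), and for real numbers you'd want bonnie/tiobench or
similar rather than this.

    #!/usr/bin/env python
    # Rough sequential write/read throughput check against a file on the
    # software RAID array.  Adjust TEST_FILE and TOTAL_MB for your setup;
    # keep TOTAL_MB larger than RAM so the read pass isn't just page cache.
    import os
    import time

    TEST_FILE = "/mnt/mdtest/bench.dat"   # assumed mount point of the md array
    BLOCK = 1024 * 1024                   # 1 MiB per write
    TOTAL_MB = 512                        # total amount written/read

    def write_test():
        buf = b"\0" * BLOCK
        fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        start = time.time()
        for _ in range(TOTAL_MB):
            os.write(fd, buf)
        os.fsync(fd)                      # flush so the block layer / driver path is measured
        os.close(fd)
        return TOTAL_MB / (time.time() - start)

    def read_test():
        fd = os.open(TEST_FILE, os.O_RDONLY)
        start = time.time()
        while os.read(fd, BLOCK):
            pass
        os.close(fd)
        return TOTAL_MB / (time.time() - start)

    if __name__ == "__main__":
        print("write: %.1f MB/s" % write_test())
        print("read:  %.1f MB/s" % read_test())

Run the same script on the hardware RAID and on the AIC7xxx + software
RAID setup under each kernel, and the adapter effect should separate
cleanly from the VM effect.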
-- Chris