Jens, I think I prefer fifo_batch=16. We do need to expose
these in /somewhere so people can fiddle with them.
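Roughly the behaviour I mean, as a userspace toy (FIFO_BATCH and
can_continue_batch are illustrative names here, not the real elevator
code):

/*
 * Toy model of a fifo_batch cap in a deadline-style dispatcher:
 * stream sequential requests until the batch is used up, then
 * force a trip back to the expired FIFO so nothing starves.
 */
#include <stdbool.h>
#include <stdio.h>

#define FIFO_BATCH 16	/* the default I'm proposing */

static int batch_count;

static bool can_continue_batch(void)
{
	if (batch_count < FIFO_BATCH) {
		batch_count++;
		return true;
	}
	batch_count = 0;	/* batch spent; caller services the FIFO */
	return false;
}

int main(void)
{
	/* Pretend 40 back-to-back sequential requests arrive. */
	for (int i = 1; i <= 40; i++) {
		if (!can_continue_batch()) {
			printf("request %d: batch spent, check FIFO first\n", i);
			can_continue_batch();	/* start a new batch */
		}
	}
	return 0;
}

The tradeoff is the obvious one: a bigger batch buys streaming
throughput at the cost of longer worst-case waits on the FIFO side.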
>...
> mem_load:
> Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
> 2.5.44      [3]     114.3  67    30     2      1.60
> 2.5.44-mm1  [3]     159.7  47    38     2      2.24
> 2.5.44-mm2  [3]     116.6  64    29     2      1.63
> 2.5.44-mm4  [3]     114.9  65    28     2      1.61
> 2.5.44-mm5  [4]     114.1  65    30     2      1.60
> 2.5.44-mm6  [3]     226.9  33    50     2      3.18
>
> Mem load has dropped off again
Well, that's one interpretation. The other is "goody, that pesky
kernel compile isn't slowing down my important memory-intensive
whateveritis so much". It's a tradeoff.
It appears that this change was caused by increasing the default
value of /proc/sys/vm/page-cluster from 3 to 4. I am surprised.
It was only of small benefit in other tests, so I'll ditch that one.
Thanks.
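For the record, page-cluster is the log2 of the number of pages read
per swap-in, so the bump from 3 to 4 doubles each swap read. Quick
arithmetic, assuming 4KB pages:

/*
 * page-cluster is log2(pages per swap read): 3 -> 8 pages,
 * 4 -> 16 pages, i.e. 32KB vs 64KB. (Page size is arch-dependent;
 * 4KB assumed here.)
 */
#include <stdio.h>

int main(void)
{
	const int page_kb = 4;

	for (int cluster = 3; cluster <= 4; cluster++) {
		int pages = 1 << cluster;
		printf("page-cluster=%d: %d pages, %d KB per swap read\n",
		       cluster, pages, pages * page_kb);
	}
	return 0;
}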
(You're still testing with all IO against the same disk, yes? Please
remember that things change quite significantly when the swap IO
or the io_load is against a different device.)