There was an improvement (reduction) in max latency during
sequential writes after the get_request starvation fix went in.
Tiobench didn't show an improvement in sequential read max
latency though; read_latency2 is what makes the huge difference
there.
The sequential read max latency walls for the various trees look like:

tree                    latency wall (# of threads)
rmap                    128
ac                      128
marcelo                 32
linus                   64
2.5-akpm-everything     >128
2.4 read_latency2       >128
I.e., running tiobench with more threads than the numbers above
would probably give the impression the machine was locked up or
frozen if your read request happened to be the unlucky max. The
average latencies are generally reasonable; it's the max latency,
and the percentage of high-latency requests, that vary the most
between the trees.
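
For illustration, here is a minimal sketch (in no way tiobench
itself) of how that max-vs-average spread can be measured: it
forks a handful of readers over one big file and reports each
reader's average and maximum per-read() latency, plus how many
reads took longer than an arbitrary 10-second threshold. The
file name, reader count and threshold are made-up example values
(the block size matches the --block 16384 runs below); the file
has to be much larger than RAM, as with tiobench's --size, or
the reads never leave the page cache.

/*
 * read_lat_sketch.c -- hypothetical example, not from this thread.
 * Fork NREADERS processes that each stream through one file and
 * track their own read() latency statistics.
 *
 * Build: gcc -O2 -o read_lat_sketch read_lat_sketch.c
 * (older glibc may need -lrt for clock_gettime)
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <time.h>
#include <sys/wait.h>

#define NREADERS  64              /* example value */
#define BLOCK     16384           /* matches --block 16384 below */
#define SLOW_SEC  10.0            /* arbitrary "looks frozen" threshold */

static double now(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
        const char *file = argc > 1 ? argv[1] : "bigfile";
        int i;

        for (i = 0; i < NREADERS; i++) {
                if (fork() == 0) {
                        char buf[BLOCK];
                        double t0, max = 0.0, sum = 0.0;
                        long n = 0, slow = 0;
                        ssize_t r;
                        int fd = open(file, O_RDONLY);

                        if (fd < 0) {
                                perror(file);
                                exit(1);
                        }
                        /* time each read() individually */
                        while ((t0 = now()), (r = read(fd, buf, BLOCK)) > 0) {
                                double lat = now() - t0;

                                sum += lat;
                                n++;
                                if (lat > max)
                                        max = lat;
                                if (lat > SLOW_SEC)
                                        slow++;
                        }
                        printf("reader %2d: avg %.4fs  max %.2fs  slow %ld/%ld\n",
                               i, n ? sum / n : 0.0, max, slow, n);
                        exit(0);
                }
        }
        while (wait(NULL) > 0)
                ;                 /* reap all readers */
        return 0;
}

tiobench does the real version of this (plus the write tests);
the point is just that a single starved request is enough to
blow out the max while barely moving the average.
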
Using the updated tiobench.pl in
http://prdownloads.sourceforge.net/tiobench/tiobench-0.3.3.tar.gz
(actually - http://home.earthlink.net/~rwhron/kernel/tiobench2.pl
which is very similar to the one in tiobench-0.3.3)
On the 4-way 4GB box:
./tiobench.pl --size 8192 --numruns 3 --block 16384 --threads 1 \
    --threads 32 --threads 64 --threads 128 --threads 256
On the k6-2 with 384M of RAM:
./tiobench.pl --size 2048 --numruns 3 --threads 8 --threads 16 \
    --threads 32 --threads 64 --threads 128
> http://www.zip.com.au/~akpm/linux/patches/2.4/2.4.19-pre5/
Thanks for updating read_latency2! I'm trying it and your other
patches on 2.4.19-pre5. :)