Just to be clear on what the "max latency" number is: it's the I/O
request within tiobench that waited the longest. That is, the process
notes the time when it is ready to issue the request, then notes the
time after the request is fulfilled. With a 2048MB file and a 4096 byte
block, that works out to 524,288 requests.
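
For illustration, here is a minimal sketch of that measurement. It is
not tiobench's actual code; the file name "testfile" is hypothetical
and the sizes are just the values from the example above:

/*
 * Sketch of the per-request "max latency" measurement: timestamp
 * before issuing each read, timestamp after it completes, keep the
 * largest difference seen.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define BLOCK_SIZE 4096
#define FILE_MB    2048

int main(void)
{
    /* 2048 MB / 4096 bytes = 524,288 requests */
    long nreq = (long)FILE_MB * 1024 * 1024 / BLOCK_SIZE;
    char buf[BLOCK_SIZE];
    double max_lat = 0.0;
    struct timeval t0, t1;
    int fd = open("testfile", O_RDONLY);   /* hypothetical test file */

    if (fd < 0)
        return 1;
    for (long i = 0; i < nreq; i++) {
        gettimeofday(&t0, NULL);           /* ready to issue */
        if (read(fd, buf, BLOCK_SIZE) <= 0)
            break;
        gettimeofday(&t1, NULL);           /* request fulfilled */
        double lat = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_usec - t0.tv_usec) / 1e6;
        if (lat > max_lat)
            max_lat = lat;                 /* the longest single wait */
    }
    printf("max latency: %.3f s over %ld requests\n", max_lat, nreq);
    close(fd);
    return 0;
}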
It's a relatively small number of requests that incur the big latency
wait, so depending on the I/O requests your other applications make
during the test, a long wait may not be obvious unless one of your
I/Os gets left at the end of the queue for a long time.
This is sometimes referred to as a "corner case".
The thread count at which the "big latency wall" manifests shows up
as a dramatic jump in the longest I/O latency. That point varies
between the kernel trees.
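
One way to make the wall visible is to repeat the measurement at
doubling thread counts and watch for the jump in worst-case latency.
Another sketch, with the same caveats (hypothetical "testfile",
arbitrary per-thread request count, not how tiobench is structured):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define BLOCK_SIZE 4096
#define REQS_PER_THREAD 4096

static double worst;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    char buf[BLOCK_SIZE];
    struct timeval t0, t1;
    int fd = open("testfile", O_RDONLY);   /* hypothetical test file */

    (void)arg;
    if (fd < 0)
        return NULL;
    for (int i = 0; i < REQS_PER_THREAD; i++) {
        gettimeofday(&t0, NULL);
        if (read(fd, buf, BLOCK_SIZE) <= 0)
            break;
        gettimeofday(&t1, NULL);
        double lat = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_usec - t0.tv_usec) / 1e6;
        pthread_mutex_lock(&lock);
        if (lat > worst)
            worst = lat;
        pthread_mutex_unlock(&lock);
    }
    close(fd);
    return NULL;
}

int main(void)
{
    for (int n = 8; n <= 256; n *= 2) {
        pthread_t tid[256];
        worst = 0.0;
        for (int i = 0; i < n; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < n; i++)
            pthread_join(tid[i], NULL);
        /* a sudden jump between successive n values marks the wall */
        printf("%3d threads: max latency %.3f s\n", n, worst);
    }
    return 0;
}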
The "big latency phenomenon" has been in the 2.4 tree at least
since 2.4.17 which is the first kernel I have this measurement
for. It probably goes back much further.
read_latency2
-------------
I tested read_latency2 with 2.4.19-pre5. pre5 vanilla hits
a wall at 32 tiobench threads for sequential reads. With
read_latency2, the wall is around 128.
For random reads, pre5 hits a wall at 64 threads. With
read_latency2, the wall is not apparent even with 128 threads.
read_latency2 appears to reduce sequential write latency
too, but not as dramatically as in the read tests.