We tend to use dbench in two modes nowadays. One is the "RAM only"
mode, where the run completes before hitting disk at all. That's
a very useful and repeatable test for CPU efficiency and lock contention.
The other mode is of course when there are enough clients and
enough dirty data for the test to go to disk. As Rik says, this
tends to be subject to chaotic effects, and it is also extremely
non-linear.
This is because when the run slows down a little, it takes longer,
so more data becomes eligible for time-expiry-based writeback, which
causes more I/O, which makes the run take longer still, and so on.
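The feedback loop above can be sketched numerically. Every constant
here (the I/O cost, the 30-second expiry, the fixed-point iteration)
is invented purely to show the shape of the effect; nothing is taken
from dbench or from kernel code:

```python
# Toy model of the feedback loop: a slower run lets more dirty data
# cross the age threshold for time-based writeback, the extra I/O
# slows the run further, and so on.  All numbers are illustrative.

def run_time(dirty_mb, writeback_fraction, io_cost=0.01, base_time=10.0):
    """Wall-clock seconds for one run: fixed CPU time plus I/O for the
    fraction of dirty data that expired and had to be written back."""
    return base_time + dirty_mb * writeback_fraction * io_cost

def expired_fraction(elapsed, expiry=30.0):
    """Fraction of dirty data old enough for time-expiry writeback:
    grows with elapsed time, capped once everything has expired."""
    return min(elapsed / expiry, 1.0)

def simulate(dirty_mb, iterations=20):
    """Iterate to a fixed point: a longer run expires more data,
    which in turn makes the run longer still."""
    elapsed = run_time(dirty_mb, 0.0)
    for _ in range(iterations):
        elapsed = run_time(dirty_mb, expired_fraction(elapsed))
    return elapsed

light = simulate(100)    # small run: the loop barely engages
heavy = simulate(20000)  # heavy run: everything expires, I/O dominates
print(light, heavy)
print((light - 10.0) / 100, (heavy - 10.0) / 20000)  # I/O cost per MB
```

Per megabyte of dirty data, the heavy run pays roughly three times
the I/O of the light one in this toy model — that is the
non-linearity: the cost per unit of work rises with the amount of
work.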
Yes, one still tends to keep an eye on the "heavy" dbench
throughput, but I suspect that tuning for this workload is a bad
thing overall. Good dbench numbers come from allowing a large
amount of dirty data to float about in memory (it will never get
written out). But for real workloads, which don't delete their own
output 30 seconds later, we want to start writeback earlier, both
to use the disk bandwidth more smoothly and to decrease memory
allocation latency.
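As a rough sketch of what "start writeback earlier" means in terms
of the knobs modern kernels expose: the dirty_background_ratio name
post-dates this message, and the arithmetic below is just the
percentage calculation, with memory sizes chosen for illustration:

```python
# Background writeback starts once dirty data exceeds a percentage of
# dirtyable memory (dirty_background_ratio in /proc/sys/vm on modern
# kernels).  Lowering it shrinks the pool of dirty data floating in
# memory: dbench-style numbers suffer, but I/O is smoother and memory
# allocation latency drops.

def background_threshold_mb(dirtyable_mb, ratio_percent):
    """Approximate dirty-data level at which background writeback
    kicks in: ratio_percent of dirtyable memory."""
    return dirtyable_mb * ratio_percent / 100

dirtyable = 4096  # assume ~4 GB of dirtyable memory for this example
print(background_threshold_mb(dirtyable, 10))  # 10% -> 409.6 MB may float
print(background_threshold_mb(dirtyable, 1))   # 1%  -> 40.96 MB may float
```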
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/