heh. Yep, Roger finally nailed it, I think.
Roy says the bug was fixed in rmap11c. Changelog says:
rmap 11c:
...
- elevator improvement (Andrew Morton)
Which includes:
-	queue_nr_requests = 64;
-	if (total_ram > MB(32))
-		queue_nr_requests = 128;
+	queue_nr_requests = (total_ram >> 9) & ~15; /* One per half-megabyte */
+	if (queue_nr_requests < 32)
+		queue_nr_requests = 32;
+	if (queue_nr_requests > 1024)
+		queue_nr_requests = 1024;
So Roy is running with 1024 requests.
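For illustration only (assuming total_ram is in kilobytes, as the "one per
half-megabyte" comment implies, and picking hypothetical memory sizes), the
sizing works out like this:

#include <stdio.h>

/* Simplified copy of the rmap11c sizing logic, for illustration only. */
static int size_queue(unsigned long total_ram_kb)
{
	int queue_nr_requests;

	queue_nr_requests = (total_ram_kb >> 9) & ~15; /* one per half-megabyte */
	if (queue_nr_requests < 32)
		queue_nr_requests = 32;
	if (queue_nr_requests > 1024)
		queue_nr_requests = 1024;
	return queue_nr_requests;
}

int main(void)
{
	/* 1GB box: (1048576 >> 9) & ~15 = 2048, clamped to 1024. */
	printf("1GB  -> %d requests\n", size_queue(1024 * 1024));
	/* 64MB box: (65536 >> 9) & ~15 = 128, no clamping needed. */
	printf("64MB -> %d requests\n", size_queue(64 * 1024));
	return 0;
}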
The question is (sorry, Roy): does this need fixing?
The only thing which can trigger it is when we have
zillions of threads doing reads (or zillions of outstanding
aio read requests) or when there are a large number of
unmerged write requests in the elevator. It's a rare
case.
If we _do_ need a fix, then perhaps we should just stop
using READA in the readahead code? Readahead is absolutely
vital to throughput, and best-effort request allocation
just isn't good enough.
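To make the READA point concrete, here's a toy user-space model, not the
real ll_rw_blk.c request allocation paths: READA is best-effort and gives
up when the free request list is empty, whereas READ sleeps (in something
like __get_request_wait()) until a request frees up, so readahead I/O
submitted with READA simply gets dropped under request starvation:

#include <stdio.h>

#define READ  0
#define READA 2	/* read-ahead: best-effort, may be dropped */

/* Toy model of the request free list; pretend it is already exhausted. */
static int free_requests = 0;

static int get_request(int rw)
{
	if (free_requests > 0) {
		free_requests--;
		return 1;		/* got a request */
	}
	if (rw == READA)
		return 0;		/* best-effort: fail, the I/O is dropped */
	/* READ/WRITE would sleep until a request frees up; here we just
	   pretend one eventually becomes available. */
	return 1;
}

int main(void)
{
	printf("READA under starvation: %s\n",
	       get_request(READA) ? "submitted" : "dropped");
	printf("READ  under starvation: %s\n",
	       get_request(READ) ? "submitted (after waiting)" : "dropped");
	return 0;
}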
Thoughts?