> --- vm-2.4.7/drivers/block/ll_rw_blk.c.2 Fri Aug 3 19:06:46 2001
> +++ vm-2.4.7/drivers/block/ll_rw_blk.c Fri Aug 3 19:32:46 2001
> @@ -1037,9 +1037,16 @@
> * water mark. instead start I/O on the queued stuff.
> */
> if (atomic_read(&queued_sectors) >= high_queued_sectors) {
> - run_task_queue(&tq_disk);
> - wait_event(blk_buffers_wait,
> - atomic_read(&queued_sectors) < low_queued_sectors);
... OUCH ...
> bah. Doesn't fix it. Still waiting indefinitely in ll_rw_blk().
And it's obvious why.
The code above, as well as your replacement, have a
VERY serious "fairness issue".
task 1                                  task 2

queued_sectors > high
==> waits for
    queued_sectors < low
                                        write stuff, submits IO
queued_sectors < high (but > low)
                                        ....
queued_sectors still < high, > low
                                        happily submits more IO
                                        ...
etc..
It is quite obvious that the second task can easily starve
the first task as long as it keeps submitting IO at a rate
where queued_sectors will stay above low_queued_sectors,
but under high_queued_sectors.
There are two possible solutions to the starvation scenario:
1) have one threshold
2) if one task is sleeping, let ALL tasks sleep
until we reach the lower threshold
regards,
Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/		http://distro.conectiva.com/
Send all your spam to aardvark@nl.linux.org (spam digging piggy)