In one sense or another, a graceful transition to unfair behavior could be
considered a kind of thrashing control; how meaningful that is in the
context of disk I/O is a question I can't answer directly, though. Do you
have any comments on this potential for strategic unfairness?
On Fri, May 24, 2002 at 02:26:48AM -0700, Andrew Morton wrote:
> I've been testing this extensively on 2.5 + multipage BIO I/O, and when you
> increase readahead from 32 pages (two BIOs) to 64 pages (four BIOs), 2.5 goes
> from perfect to horrid - each thread grabs the disk head and performs many,
> many megabytes of reads before any other thread gets a share. The net effect
> is that the tiobench numbers are great, but any operation which involves
> reading from disk sees 30 or 60 second latencies.
> Interestingly, it seems specific to IDE. SCSI behaves well.
> I have tons of traces and debug code - I'll bug Jens about this in a week or
> so.
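As an aside, that kind of starvation ought to show up with even a crude
reproducer along the lines below. This is only a sketch of my own, not the
tiobench setup described above; the file names and chunk size are arbitrary,
and it needs -lpthread. Each thread streams through its own large file and
reports the worst single read() latency it saw - on a fair elevator the
worst cases should stay modest, while the behavior you describe should show
one thread finishing quickly and the others reporting multi-second stalls.

/*
 * Hypothetical sketch: N threads, each streaming its own large file,
 * each reporting the longest single read() it observed.
 *
 * Usage: ./readers file1 file2 [file3 ...]
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/time.h>

#define CHUNK (64 * 1024)	/* arbitrary read size */

struct worker {
	const char *path;
	double worst_ms;	/* longest single read(), in milliseconds */
};

static double now_ms(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

static void *reader(void *arg)
{
	struct worker *w = arg;
	char *buf = malloc(CHUNK);
	int fd = open(w->path, O_RDONLY);

	if (fd < 0 || !buf) {
		perror(w->path);
		free(buf);
		return NULL;
	}
	for (;;) {
		double t0 = now_ms(), dt;
		ssize_t n = read(fd, buf, CHUNK);

		if (n <= 0)
			break;
		dt = now_ms() - t0;
		if (dt > w->worst_ms)
			w->worst_ms = dt;
	}
	close(fd);
	free(buf);
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid[argc];
	struct worker w[argc];
	int i;

	if (argc < 3) {
		fprintf(stderr, "usage: %s file1 file2 [...]\n", argv[0]);
		return 1;
	}
	for (i = 1; i < argc; i++) {
		w[i].path = argv[i];
		w[i].worst_ms = 0.0;
		pthread_create(&tid[i], NULL, reader, &w[i]);
	}
	for (i = 1; i < argc; i++) {
		pthread_join(tid[i], NULL);
		printf("%s: worst read latency %.0f ms\n",
		       w[i].path, w[i].worst_ms);
	}
	return 0;
}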
What kinds of phenomena appear to be associated with the IDE latencies?
I recall comments from prior IDE maintainers about poor interactions
between the generic disk I/O layers and the IDE drivers, particularly
with respect to small transactions being handed down for the drivers to
perform. Are those comments still relevant, or is this of a different nature?
Cheers,
Bill