Ha. For once you're both wrong, but not where you're thinking. One of
the few places that I actually hacked Linux was for exactly this - it was
in the 0.99 days, I think. I saved the list of I/Os in a file and filled
the buffer cache with them at the next boot. It actually didn't help at all.
I don't remember why - maybe it was just that machines back then didn't have
the memory - but I think it was more subtle than that. It's basically a
queuing problem, and my instincts were wrong: I thought that if I could get
all the data in there, things would go faster. If you think through all the
stuff going on during a boot, it doesn't really work that way.
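
The rough modern-day, user-space equivalent of that hack would be to replay
a saved file list into the page cache. The sketch below is only an
illustration - the "boot.list" name and one-path-per-line format are
invented, and this is not what the 0.99 patch actually did:

    /* Replay a saved list of files into the page cache. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            FILE *list = fopen("boot.list", "r");   /* invented name */
            char path[4096];

            if (!list)
                    return 1;
            while (fgets(path, sizeof(path), list)) {
                    int fd;

                    path[strcspn(path, "\n")] = '\0';
                    fd = open(path, O_RDONLY);
                    if (fd < 0)
                            continue;
                    /* ask the kernel to pull the whole file into the cache */
                    posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
                    close(fd);
            }
            fclose(list);
            return 0;
    }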
Anyway, a _much_ better thing to do would be to have all this data laid
out contiguously, then slurp all the blocks in with one I/O and let them
get turned into files. This has been true for the last 30 years and people
still don't do it. We're actually moving in this direction with BitKeeper:
in the future, large numbers of small files will be stored in one big file
and extracted on demand. Then we do one I/O to get all the related stuff.
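
A toy of what the read side looks like - the pack format here (a count,
then fixed-size name/size headers) is invented for illustration and is not
BitKeeper's on-disk format - but it shows the point: one big sequential
read, then carve the buffer up into files:

    /* One big read of a packed blob, then split it into member files. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    struct member {                 /* invented header, not BK's format */
            char name[64];
            unsigned int size;
    };

    int main(int argc, char **argv)
    {
            struct stat st;
            unsigned int i, nmembers;
            char *buf, *p;
            int fd;

            if (argc < 2)
                    return 1;
            fd = open(argv[1], O_RDONLY);
            if (fd < 0 || fstat(fd, &st) < 0)
                    return 1;
            buf = malloc(st.st_size);
            /* the whole point: one big sequential read, not one per file */
            if (!buf || read(fd, buf, st.st_size) != st.st_size)
                    return 1;
            close(fd);

            p = buf;
            nmembers = *(unsigned int *)p;
            p += sizeof(unsigned int);
            for (i = 0; i < nmembers; i++) {
                    struct member *m = (struct member *)p;
                    int out;

                    p += sizeof(*m);
                    out = open(m->name, O_WRONLY|O_CREAT|O_TRUNC, 0644);
                    if (out >= 0) {
                            write(out, p, m->size);
                            close(out);
                    }
                    p += m->size;
            }
            free(buf);
            return 0;
    }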
Dave Hitz at NetApp is about the only guy I know who really gets this;
Daniel Phillips may also get it - he's certainly thinking about it. Lots
of little I/Os == bad, one big I/O == good. Work through the numbers
and it starts to look like you'd never want to do less than a 1MB I/O,
probably not less than a 4MB one.
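
To make "work through the numbers" concrete: with, say, ~8ms of seek plus
rotation per I/O and ~50MB/sec off the media (assumed, era-typical figures,
not measurements from anywhere in particular), effective bandwidth looks
like this:

    /* effective bandwidth vs. I/O size, assuming 8ms seek + 50MB/s media */
    #include <stdio.h>

    int main(void)
    {
            double seek_ms = 8.0, media_mb_s = 50.0;
            double kb[] = { 4, 64, 256, 1024, 4096 };
            int i;

            for (i = 0; i < 5; i++) {
                    double xfer_ms = kb[i] / 1024 / media_mb_s * 1000;
                    double eff = (kb[i] / 1024) / ((seek_ms + xfer_ms) / 1000);

                    printf("%5.0fKB per I/O: %5.1f MB/sec effective\n",
                           kb[i], eff);
            }
            return 0;
    }

A 4KB I/O spends nearly all its time seeking and gets about half a MB/sec;
by 1MB you're at roughly 70% of the media rate, and by 4MB at roughly 90%.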
--
Larry McVoy            lm at bitmover.com       http://www.bitmover.com/lm