AKPM did some simulated testing of this and it wasn't too bad, but there
is, of course, a tradeoff: if you pack your files more closely to improve
short-term performance, you can cause additional fragmentation in the
future if some of those files are randomly deleted.
However, I don't think the Orlov allocator is "FAT-like" and just fills
everything up sequentially.
An interesting test, let's say, would be cloning the very base 2.4.0
BK repository, then applying all of the changesets in sequence (or the
equivalent with tarballs and patches), and timing "make" between each
run (if there were a way to flush the page cache for that filesystem,
it would be very helpful). This would easily simulate a long-lived
directory tree in a very reproducible and quantitative way.
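As a rough illustration only, here is a minimal sketch of that benchmark
loop in Python. The mount point, tree location, and patch directory are
all hypothetical, and it assumes the test filesystem has an /etc/fstab
entry so a bare umount/remount (one way to drop that filesystem's cached
pages) works:

    #!/usr/bin/env python
    # Hypothetical benchmark driver: apply patches in order, timing a
    # cold-cache "make" of the tree after each one.
    import glob
    import subprocess
    import time

    FS_MOUNT = "/mnt/test"          # assumed mount point of fs under test
    TREE = FS_MOUNT + "/linux"      # assumed 2.4.0 tree on that filesystem
    PATCH_DIR = "/usr/src/patches"  # assumed changesets/patches, in order

    def flush_page_cache():
        # Unmounting and remounting drops the page cache for just this
        # filesystem (assumes FS_MOUNT is listed in /etc/fstab).
        subprocess.check_call(["umount", FS_MOUNT])
        subprocess.check_call(["mount", FS_MOUNT])

    def timed_make():
        # Time one build of the tree starting from a cold page cache.
        flush_page_cache()
        start = time.time()
        subprocess.check_call(["make", "-s"], cwd=TREE)
        return time.time() - start

    for patch in sorted(glob.glob(PATCH_DIR + "/*.patch")):
        with open(patch) as p:
            subprocess.check_call(["patch", "-p1", "-s"], stdin=p, cwd=TREE)
        print("%-40s make: %6.1fs" % (patch, timed_make()))

The per-patch timings would then show whether build times degrade as the
directory tree ages, which is exactly the long-life effect in question.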
Cheers, Andreas
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/