Well, just to emphasize the "block group" issues, you could try testing
with a 1kB or 2kB block filesystem. Since a group's block bitmap must fit
in a single block, group size grows as the square of the block size, so
this will give you 16x or 4x as many groups as a 4kB block filesystem,
respectively.
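To put numbers on it, a quick back-of-the-envelope sketch in Python (the
4GB filesystem size is just an assumption for illustration):

    import math

    def ext2_groups(fs_bytes, block_size):
        # a group's block bitmap is one block (8 * block_size bits), so a
        # group covers at most 8 * block_size blocks = 8 * block_size**2 bytes
        return math.ceil(fs_bytes / (8 * block_size ** 2))

    for bs in (1024, 2048, 4096):
        print(f"{bs}B blocks: {ext2_groups(4 * 2**30, bs)} groups")

For a 4GB filesystem that prints 512, 128, and 32 groups, i.e. the
16x/4x ratios above.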
A more "valid" test, IMHO, would be "untar the kernel, (flush buffers),
build kernel" on both the original, and your "all in one group" inode
allocation heuristics. It should be especially noticable on a 1kB
filesystem. What this will show (I think) is that while untar/read
with your method will be fast (all inodes/files contiguous on disk)
once you start trying to write to that filesystem, you will have more
fragmentation/seeking for the writes. It may be that with large-memory
systems you will cache so much you don't see a difference, hence the
(flush buffers) part, which is probably umount, mount.
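Something along these lines, say, where the device, mount point, and
tarball name are all placeholders (and it needs root for mount/umount):

    import subprocess, time

    DEV, MNT = "/dev/hdb1", "/mnt/test"    # hypothetical device/mount point

    def run(cmd):
        subprocess.run(cmd, shell=True, check=True)

    def flush():
        # crude "flush buffers": umount/mount so nothing stays cached
        run(f"umount {MNT}")
        run(f"mount {DEV} {MNT}")

    run(f"mke2fs -b 1024 {DEV}")           # 1kB blocks -> many small groups
    run(f"mount {DEV} {MNT}")
    start = time.time()
    run(f"tar -x -C {MNT} -f linux.tar")   # untar the kernel tree
    flush()
    run(f"make -C {MNT}/linux bzImage")    # the build exercises the writes
    print("elapsed: %.1fs" % (time.time() - start))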
An even better test would be: untar a kernel, patch it up a few major
versions, then try to compile. The old heuristic would probably be OK, as
there is space in each group for files to grow, while your heuristic would
move files into groups other than their parent because there is no space
left. A sketch of that aging step follows.
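Continuing the sketch above (reusing run(), flush(), and MNT from there;
the patch file names are illustrative only):

    # apply successive patches so existing files grow and are rewritten,
    # flushing between steps to age the filesystem on disk
    for p in ("patch-2.4.1", "patch-2.4.2", "patch-2.4.3"):
        run(f"patch -p1 -d {MNT}/linux < {p}")
        flush()
    run(f"make -C {MNT}/linux bzImage")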
In the end, though, while the old heuristic has a good theory behind it,
it _may_ be that in practice you are _always_ seeking to get data from
different groups, rather than _theoretically_ seeking because of
fragmented files. I don't know what the answer is - it probably depends
on finding "valid" benchmarks (cough).
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/