Well, that coincides nicely with the period when most of us will still
be using 32-bit processors, don't you think? If we solve the problems
of internal fragmentation (as Reiserfs has) and memory management, then
we can keep going up to a 64K blocksize, giving a 256 TB limit. Not
too shabby. (Some things need fixing after that, e.g. Ext2 directory
entry record sizes.)
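
For the record, here's the arithmetic behind the 256 TB figure, as a
quick sketch assuming 32-bit block numbers, i.e. 2^32 addressable
blocks of 64K each:

#include <stdio.h>

int main(void)
{
	unsigned long long nr_blocks = 1ULL << 32;	/* 2^32 block numbers */
	unsigned long long blocksize = 64 * 1024;	/* 64K per block */
	unsigned long long limit = nr_blocks * blocksize;

	printf("%llu bytes = %llu TB\n", limit, limit >> 40);	/* 256 */
	return 0;
}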
At the same time, a larger block size means that, to transfer a given
amount of data scattered at random locations, fewer requests are
needed, so less time is spent seeking and less in per-request setup.
Larger blocks are good - there's a reason the industry is heading in
that direction. If it also helps us with our partition size limits,
why not take advantage of it? I'd say do both: measure in the logical
blocksize and provide 64-bit block numbers as an option.
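
Something along these lines is what I have in mind - a sketch only,
with made-up names rather than existing kernel interfaces: capacity is
kept as a count of the device's own logical blocks, and a config
option widens the block number type to 64 bits.

/* Sketch only: hypothetical names, not existing kernel interfaces. */
#ifdef CONFIG_64BIT_BLOCK_NUMBERS
typedef unsigned long long blkno_t;	/* up to 2^64 logical blocks */
#else
typedef unsigned long blkno_t;		/* 2^32 blocks on a 32-bit host */
#endif

struct blkdev_size {
	unsigned int	logical_blocksize;	/* e.g. 512, 1024, 4096 */
	blkno_t		nr_blocks;		/* capacity in logical blocks */
};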
Note that there is a bug-by-design that comes from measuring device
capacity in 1K blocks the way we do now: on a device with 512-byte
blocks we can't correctly determine when a block access is out of
range, because a capacity that is an odd number of 512-byte blocks
can't be expressed exactly in 1K units, so the check at the very end
of the device is off by one block. Measuring in the device's logical
block size would fix that cleanly.
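
To illustrate (a hedged sketch, helper names made up, not kernel
code): with the size kept in 1K units the bounds check on the final
sector of an odd-sized device is wrong whichever way you round, while
a count of logical blocks has no rounding to go wrong.

#include <stdbool.h>

/* Capacity kept in 1 KiB units: a 512-byte-sector device with an odd
 * number of sectors can't be described exactly, so this check is wrong
 * for the final sector (rejects it if the size was rounded down, or
 * accepts a nonexistent one if rounded up). */
static bool sector_in_range_1k(unsigned long long sector,
			       unsigned long long size_in_kb)
{
	return sector < size_in_kb * 2;
}

/* Capacity kept in the device's own logical block size: exact count,
 * so the comparison is always right. */
static bool block_in_range(unsigned long long block,
			   unsigned long long nr_logical_blocks)
{
	return block < nr_logical_blocks;
}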
-- Daniel