On Tue, Oct 01, 2002 at 02:43:30PM -0600, Andreas Dilger wrote:
> As a result, if the size of the directory + inode table blocks is larger
> than memory, and also larger than 1/4 of the journal, you are essentially
> seek-bound because of random block dirtying.
> You should see what the size of the directory is at its peak (probably
> 16 bytes * 300k ~= 5MB), add in the size of the inode table blocks
> (128 bytes * 300k ~= 38MB), and make the journal 4x as large as that,
> so 192MB (mke2fs -j -J size=192), and re-run the test (I assume you
> have at least 256MB of RAM on the test system).
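To spell out the arithmetic Andreas is doing (a rough sketch only; the
300k file count and the 16-byte dirent / 128-byte on-disk inode sizes
are taken from his numbers above, with MB meaning 10^6 bytes here):

	/* back-of-the-envelope journal sizing for ~300k files */
	#include <stdio.h>

	int main(void)
	{
		long files       = 300000;
		long dir_bytes   = 16L  * files; /* ~16 bytes per directory entry */
		long inode_bytes = 128L * files; /* 128 bytes per on-disk inode */
		double dirty_mb  = (dir_bytes + inode_bytes) / 1e6;

		/* journal ~4x the randomly dirtied metadata */
		printf("dirty metadata ~%.1f MB, journal >= %.1f MB\n",
		       dirty_mb, 4.0 * dirty_mb);
		return 0;
	}

which prints ~43.2 MB of dirty metadata and a ~172.8 MB minimum,
roughly the 192MB he arrives at after rounding up.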
Hm. But none of that will help if you need to read the inodes from disk
first, right? (Until inode allocation in chunks is implemented, of course.)
BTW, with inode allocation in chunks attached to directory blocks, you
won't get any benefit if an application creates a file in some temporary
dir and then rename()s it to its proper place, or am I missing
something?
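To make that pattern concrete, a minimal sketch (the tmp/ and spool/
paths and the msg.123 name are made up, and both directories are
assumed to exist):

	/* MTA-style create-then-rename: the inode is allocated relative
	 * to tmp/, and rename() only moves the directory entry, so any
	 * inode clustering keyed to spool/ is lost. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("tmp/msg.123", O_CREAT | O_EXCL | O_WRONLY, 0644);
		if (fd < 0) { perror("open"); return 1; }
		if (write(fd, "data\n", 5) != 5)
			perror("write");
		close(fd);

		/* only the name moves; the inode stays where it was */
		if (rename("tmp/msg.123", "spool/msg.123") < 0) {
			perror("rename");
			return 1;
		}
		return 0;
	}

The final directory never sees the create, so chunk allocation keyed to
it can't cluster this file's inode.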
> What is very interesting from the above results is that the CPU usage
> is _much_ smaller for ext3+htree than for reiserfs. It looks like [...]
This is only the case for deletion; it is probably related to the
constant shifting of items in reiserfs when some of them are deleted.
Bye,
Oleg