2.4-aa is outperforming 2.5 in almost all tiobench results, so I doubt
the elevator is bad enough to explain such a drop in performance.
I suspect it is something along the lines of the filesystem doing
synchronous I/O for some reason inside writepage, like doing a
wait_on_buffer for every writepage, which would generate the fake
results above.
Note the 0% cpu time: you're not benchmarking the vm here. In fact I
would be interested to see the above repeated on ext2.
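To make the suspicion concrete, the pattern I'm worried about would be
something like the following (purely illustrative sketch, not code from
any real tree; buffer-layer names as in 2.5):

static int sync_writepage_sketch(struct page *page)
{
	struct buffer_head *bh, *head;

	/* push out each buffer of the page and sleep on it one by one */
	head = bh = page_buffers(page);
	do {
		ll_rw_block(WRITE, 1, &bh);
		wait_on_buffer(bh);	/* synchronous wait per buffer */
		bh = bh->b_this_page;
	} while (bh != head);

	unlock_page(page);
	return 0;
}

A writepage like that would leave the benchmark sleeping in the fs
instead of stressing the vm or the elevator.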
It's not true that ext3 shares the same writepage as ext2, as you said
in an earlier email; the ext3 writepage starts like this:
static int ext3_writepage(struct page *page)
{
	struct inode *inode = page->mapping->host;
	struct buffer_head *page_buffers;
	handle_t *handle = NULL;
	int ret = 0, err;
	int needed;
	int order_data;

	J_ASSERT(PageLocked(page));

	/*
	 * We give up here if we're reentered, because it might be
	 * for a different filesystem. One *could* look for a
	 * nested transaction opportunity.
	 */
	lock_kernel();
	if (ext3_journal_current_handle())
		goto out_fail;

	needed = ext3_writepage_trans_blocks(inode);

	if (current->flags & PF_MEMALLOC)
		handle = ext3_journal_try_start(inode, needed);
	else
		handle = ext3_journal_start(inode, needed);
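For comparison, the ext2 writepage in the same tree is, if I recall
correctly, just a thin wrapper around the generic buffer code:

static int ext2_writepage(struct page *page)
{
	return block_write_full_page(page, ext2_get_block);
}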
And even that ext2 writepage can be synchronous if it has to call
get_block. In fact I would recommend filling the "foo" file with zeros
rather than leaving holes in it, just to avoid additional synchronous
fs overhead and to only be synchronous in the inode map lookup.
Andrea