Seems that SDET has many threads performing seeky reads against a
cached file.  The average number of pagecache probes in a single
do_page_cache_readahead() call is six, which seems reasonable.
The patch (from Anton) flips the locking around to optimise for the
fast case (the page is already present).  The kernel now takes the
lock less often, and does more work per acquisition.
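Roughly, the idea in userspace terms (a sketch only: a pthread rwlock
stands in for mapping->page_lock, and cache_lookup()/preallocate() are
made-up names, not kernel interfaces):

#include <pthread.h>
#include <stdlib.h>

#define CACHE_SLOTS 1024

static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;
static void *cache[CACHE_SLOTS];	/* stand-in for mapping->page_tree */

/* Probe the cache.  Caller must hold cache_lock for reading. */
static void *cache_lookup(unsigned long idx)
{
	return cache[idx % CACHE_SLOTS];
}

static void preallocate(unsigned long offset, unsigned long nr)
{
	unsigned long i;

	/*
	 * Take the lock once, before the loop: in the common case every
	 * probe hits and we never drop it.
	 */
	pthread_rwlock_rdlock(&cache_lock);
	for (i = 0; i < nr; i++) {
		void *page;

		if (cache_lookup(offset + i))
			continue;	/* hit: stay under the lock */
		/*
		 * Miss: drop the lock across the allocation (which may
		 * block) and retake it afterwards.
		 */
		pthread_rwlock_unlock(&cache_lock);
		page = malloc(4096);
		pthread_rwlock_rdlock(&cache_lock);
		if (!page)
			break;
		/* ... add page to a private pool for later IO ... */
		free(page);	/* sketch only: no IO is actually started */
	}
	pthread_rwlock_unlock(&cache_lock);
}

The unlock/relock around the miss path mirrors what the patch does
around page_cache_alloc(), presumably because page_lock is a spinning
rwlock and the allocation may sleep.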
=====================================
--- 2.5.16/mm/readahead.c~anton-readahead-locking Sun May 19 11:49:45 2002
+++ 2.5.16-akpm/mm/readahead.c Sun May 19 12:02:58 2002
@@ -117,25 +117,27 @@ void do_page_cache_readahead(struct file
/*
* Preallocate as many pages as we will need.
*/
+ read_lock(&mapping->page_lock);
for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
unsigned long page_offset = offset + page_idx;
if (page_offset > end_index)
break;
- read_lock(&mapping->page_lock);
page = radix_tree_lookup(&mapping->page_tree, page_offset);
- read_unlock(&mapping->page_lock);
if (page)
continue;
+ read_unlock(&mapping->page_lock);
page = page_cache_alloc(mapping);
+ read_lock(&mapping->page_lock);
if (!page)
break;
page->index = page_offset;
list_add(&page->list, &page_pool);
nr_to_really_read++;
}
+ read_unlock(&mapping->page_lock);
/*
* Now start the IO. We ignore I/O errors - if the page is not