Indeed, it should treat all memory equally, except when we
really have far too much cache. I'll resend the patch with
the subject clearly marked since this trivial thing really
does need testers ;)
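
For anyone who wants to sanity-check the arithmetic before testing, here is a
rough userspace sketch of the threshold the patch below introduces. All the
numbers in it are invented (131072 pages standing in for a 512MB box with 4K
pages); only the comparison mirrors the new too_much_cache macro:

#include <stdio.h>

/* Standalone sketch of the too_much_cache test from the patch below.
 * Every value here is an example, not taken from a real machine. */
int main(void)
{
	unsigned long num_physpages = 131072;	/* 512MB worth of 4K pages (example) */
	unsigned long borrow_percent = 60;	/* the new page_cache.borrow_percent */
	unsigned long page_cache_size = 90000;	/* pages in the page cache (example) */
	unsigned long swap_cache = 2000;	/* swapper_space.nrpages (example) */

	unsigned long threshold = num_physpages * borrow_percent / 100;
	unsigned long cache = page_cache_size - swap_cache;

	/* same comparison the too_much_cache macro makes */
	if (cache > threshold)
		printf("%lu > %lu: start deactivating cache pages\n", cache, threshold);
	else
		printf("%lu <= %lu: leave the cache alone\n", cache, threshold);
	return 0;
}

With those example numbers the threshold works out to about 78600 pages
(roughly 307MB), so the 88000-page cache would start being deactivated.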
regards,
Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/
--- mm/vmscan.c.orig	Sun Sep 16 16:44:14 2001
+++ mm/vmscan.c	Sun Sep 16 16:49:09 2001
@@ -731,6 +731,8 @@
  */
 #define too_many_buffers (atomic_read(&buffermem_pages) > \
 		(num_physpages * buffer_mem.borrow_percent / 100))
+#define too_much_cache ((page_cache_size - swapper_space.nrpages) > \
+		(num_physpages * page_cache.borrow_percent / 100))
 int refill_inactive_scan(unsigned int priority)
 {
 	struct list_head * page_lru;
@@ -793,6 +795,18 @@
 	 * be reclaimed there...
 	 */
 	if (page->buffers && !page->mapping && too_many_buffers) {
+		deactivate_page_nolock(page);
+		page_active = 0;
+	}
+
+	/*
+	 * If the page cache is too large, move the page
+	 * to the inactive list. If it is really accessed
+	 * it'll be referenced before it reaches the point
+	 * where we'll reclaim it.
+	 */
+	if (page->mapping && too_much_cache && page_count(page) <=
+			(page->buffers ? 2 : 1)) {
 		deactivate_page_nolock(page);
 		page_active = 0;
 	}
--- mm/swap.c.orig	Sun Sep 16 16:50:43 2001
+++ mm/swap.c	Sun Sep 16 16:50:58 2001
@@ -64,7 +64,7 @@
 
 buffer_mem_t page_cache = {
 	2,	/* minimum percent page cache */
-	15,	/* borrow percent page cache */
+	60,	/* borrow percent page cache */
 	75	/* maximum */
 };
 