Something like that seems like the only obvious way to balance how soon
these caches are flushed without over- or under-kill.
Also, shouldn't shrink_dqcache_memory use priority rather than
DEF_PRIORITY? And I'm not sure what the reasoning is behind the
nr_pages = chunk_size reset.
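
For reference, here's roughly what mm/vmscan.c::shrink_caches() looks
like in 2.4.16. This is paraphrased from memory rather than pasted from
the tree, so treat it as a sketch of the flow, not the exact source:

/* approximation of 2.4.16 mm/vmscan.c::shrink_caches(), not verbatim */
static int shrink_caches(zone_t *classzone, int priority,
			 unsigned int gfp_mask, int nr_pages)
{
	int chunk_size = nr_pages;
	unsigned long ratio;

	nr_pages -= kmem_cache_reap(gfp_mask);
	if (nr_pages <= 0)
		return 0;

	nr_pages = chunk_size;	/* <-- the reset I'm asking about */

	/* aim for an active list about 2/3 the size of the cache */
	ratio = (unsigned long) nr_pages * nr_active_pages /
					((nr_inactive_pages + 1) * 2);
	refill_inactive(ratio);

	nr_pages = shrink_cache(nr_pages, classzone, gfp_mask, priority);
	if (nr_pages <= 0)
		return 0;

	/* only reached when shrink_cache() didn't free enough, so the
	 * dentry/inode caches get pruned late and infrequently */
	shrink_dcache_memory(priority, gfp_mask);
	shrink_icache_memory(priority, gfp_mask);
#ifdef CONFIG_QUOTA
	shrink_dqcache_memory(DEF_PRIORITY, gfp_mask);	/* <-- why not priority? */
#endif

	return nr_pages;
}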
In the case Leigh and I are seeing (and in my readprofile runs), it
sounds like shrink_cache is getting called a ton, while the bloated
dentry/inode caches are flushed too little, too late.
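
To make Mike's suggestion (quoted below) concrete, this is the kind of
change I have in mind, written against the sketch above and with the
priority question folded in. Untested, illustrative only:

--- mm/vmscan.c.orig
+++ mm/vmscan.c
@@ shrink_caches()
 	int chunk_size = nr_pages;
 	unsigned long ratio;
 
+	/* prune the dentry/inode/quota caches on every pass, before
+	 * shrink_cache(), instead of only when it comes up short */
+	shrink_dcache_memory(priority, gfp_mask);
+	shrink_icache_memory(priority, gfp_mask);
+#ifdef CONFIG_QUOTA
+	shrink_dqcache_memory(priority, gfp_mask);
+#endif
+
 	nr_pages -= kmem_cache_reap(gfp_mask);
 	if (nr_pages <= 0)
 		return 0;
@@ shrink_caches()
 	nr_pages = shrink_cache(nr_pages, classzone, gfp_mask, priority);
 	if (nr_pages <= 0)
 		return 0;
 
-	shrink_dcache_memory(priority, gfp_mask);
-	shrink_icache_memory(priority, gfp_mask);
-#ifdef CONFIG_QUOTA
-	shrink_dqcache_memory(DEF_PRIORITY, gfp_mask);
-#endif
-
 	return nr_pages;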
Just $0.02 from a newbie. ;)
Thanks for the tip, Mike,
-- Ken. brownfld@irridia.com
On Sun, Dec 09, 2001 at 06:17:11PM +0100, Mike Galbraith wrote:
| On Sun, 9 Dec 2001, Leigh Orf wrote:
|
| > In a personal email, Mike Galbraith wrote to me:
| >
| > | On Sat, 8 Dec 2001, Leigh Orf wrote:
| > |
| > | > inode_cache   439584 439586  512 62798 62798  1
| > | > dentry_cache  454136 454200  128 15140 15140  1
| > |
| > | I'd try moving shrink_[id]cache_memory to the very top of
| > | vmscan.c::shrink_caches.
| > |
| > |         -Mike
| >
| > Mike,
| >
| > I tried what you suggested starting with a stock 2.4.16 kernel, and it
| > did fix the problem with 2.4.16 ENOMEM being returned.
| >
| > Now with that change and after updatedb runs, here's what the memory
| > situation looks like. Note inode_cache and dentry_cache are almost
| > nothing. Dunno if that's a good thing or not, but I'd definitely
|
| Almost nothing isn't particularly good after updatedb ;-)
|
| > consider this for a patch.
|
| No, but those do need faster pruning imho. The growth rate can be
| really really fast at times.
|
|         -Mike