Hi Rik,
I understand that auto-balancing code that handles all situations is
very hard to design, so let me share my experience with other Unix
systems (from a user/administrator point of view):
I have used Unix systems (mainly HPUX) for several years, both as
personal workstations and as servers, and the buffer cache usage
patterns were very different:
On workstations, you are mainly looking for fast interactive response
time and want to dedicate as much memory as possible to running
processes, so limiting the buffer cache to 10% of physical memory
(these workstations typically had 32-64 MB of RAM) worked well.
On file servers, interactive response time is much less important than
file/network throughput. In that case, using 80% of RAM for the buffer
cache is good, and you may even want to keep it from dropping below
50%, even if that slows down some batch processes.
Both cases were easily handled by two HPUX kernel tunable parameters
that defined the minimum and maximum number of pages the buffer cache
could use.
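For what it's worth, the mechanism such a pair of tunables implies is
very simple; here is a minimal sketch of the clamping logic (the names
are illustrative only, not actual HPUX or Linux symbols):

/* Illustrative sketch: clamp the cache's desired size between
 * admin-defined minimum and maximum percentages of physical RAM. */
static unsigned long bufcache_target(unsigned long wanted_pages,
				     unsigned long total_pages,
				     unsigned int min_pct,
				     unsigned int max_pct)
{
	unsigned long min_pages = total_pages * min_pct / 100;
	unsigned long max_pages = total_pages * max_pct / 100;

	if (wanted_pages < min_pages)
		return min_pages;
	if (wanted_pages > max_pages)
		return max_pages;
	return wanted_pages;
}

Setting min_pct == max_pct would even give you the old fixed-size
cache as a degenerate case.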
This could be exposed on Linux via /proc. I know it is already done
for the minimum limit (via /proc/sys/vm/buffermem in 2.2; I have no
experience with 2.4 yet). I have run into situations where not being
able to force a maximum limit was a problem.
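To make that concrete: in 2.2, /proc/sys/vm/buffermem already holds
three fields (min_percent, borrow_percent, max_percent), but as far as
I can tell only the first one is honoured. If the kernel respected the
third field too, an administrator could set both limits with a small
user-space sketch like this (untested, assumes the 2.2 format):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/buffermem", "r+");
	int min_pct, borrow_pct, max_pct;

	if (!f) {
		perror("/proc/sys/vm/buffermem");
		return 1;
	}
	if (fscanf(f, "%d %d %d", &min_pct, &borrow_pct, &max_pct) == 3)
		printf("current: min=%d%% borrow=%d%% max=%d%%\n",
		       min_pct, borrow_pct, max_pct);

	/* Ask for a 10%..50% band (workstation-style tuning). */
	rewind(f);
	fprintf(f, "10 10 50\n");
	fclose(f);
	return 0;
}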
You could argue that with a good balancing algorithm, user-defined
limits are useless. Believe me, my experience on HPUX workstations
showed that lowering the maximum limit from its 50% default to 10%
turned some sluggish machines into speed daemons!
Just my $0.02.
Hope this helps,
--
Joseph Bueno