> Now, I think the theory itself is OK. The problem is that the stuff in
> buffer/caches is too sticky. It does not go away when "more important"
> uses for memory come up. Or at least it does not go away fast enough.
>
Then we need a larger free target to cope with the slow cache freeing.
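To make that concrete, a rough sketch of what "scale the free target by
how slow reclaim is" could look like; the struct, the field names and
the scaling rule are invented for illustration, not existing kernel
code:

    /* Sketch: if freeing a cached page is measurably slower than an
     * allocation, keep a proportionally larger reserve of free pages
     * so the "rubber band" can absorb bursts while reclaim catches up.
     */
    struct reclaim_stats {
            unsigned long base_free_target; /* pages kept free normally     */
            unsigned long reclaim_usecs;    /* measured cost to free a page */
            unsigned long alloc_usecs;      /* typical allocation latency   */
    };

    static unsigned long free_target(const struct reclaim_stats *s)
    {
            unsigned long ratio;

            ratio = s->reclaim_usecs / (s->alloc_usecs ? s->alloc_usecs : 1);
            if (ratio == 0)
                    ratio = 1;
            return s->base_free_target * ratio;
    }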
> > free RAM, just like you need "excess horsepower" to make
> > automobiles drivable. That free RAM is the needed "rubber-band"
> > to absorb the dynamics of real-world systems.
>
> Question: what about just setting a maximum limit on the cache/buffer
> size, either absolute or as a fraction of total available memory? Sure,
> it may be a waste of memory in most situations, but sometimes the
> administrator/user of a system simply "knows better" than the FVM (F ==
> Fine ? :-)
[...]
> I know, the tuning-knob approach is frowned upon. But sometimes there
> are workloads where even the best VM may not know how to react
> correctly.
Wasting memory "in most situations" isn't really an option. But I
see nothing wrong with "knobs" as long as they are automatic by
default. Those who want to optimize for a corner case can
go and turn off the autopilot.
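For illustration, a toy C version of such a knob; cache_cap_pct,
CACHE_CAP_AUTO and the percentage rule are made up for this sketch and
are not existing sysctls:

    #include <stdio.h>

    #define CACHE_CAP_AUTO  (-1)            /* default: let the VM decide */

    static int cache_cap_pct = CACHE_CAP_AUTO;  /* admin-settable knob */

    static unsigned long cache_limit(unsigned long total_pages)
    {
            if (cache_cap_pct == CACHE_CAP_AUTO)
                    return total_pages;     /* autopilot: no hard cap */
            /* Admin "knows better": hard-cap cache at a fixed fraction. */
            return total_pages * cache_cap_pct / 100;
    }

    int main(void)
    {
            printf("auto:   %lu pages\n", cache_limit(262144));
            cache_cap_pct = 40;             /* corner case: cap cache at 40% */
            printf("capped: %lu pages\n", cache_limit(262144));
            return 0;
    }

Defaulting to AUTO keeps the autopilot on for everyone; only those who
explicitly set the knob pay the cost of a fixed cap.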
Helge Hafting