Yes, scanning too much is mainly a waste of cpu, but it also has the
bad effect of degrading the quality of the long term aging information
(because pages aren't fairly aged while they're on the inactive queues).
I'm thinking about an alternative way of aging that lets ->age be a
signed value; then, when you get a surge in demand for deactivation,
you just raise the threshold at which deactivation takes place. This
would be in addition to scanning more, but it should save a lot of
cpu, and at exactly the time when it's good to save cpu. There's
still a lot of thinking to do about what the function of the current
exponential down-aging really is, and how to capture its good effects
with subtracts instead of shifts. Caveat: not 2.4 stuff, obviously.
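
To make the idea concrete, here's a rough, purely illustrative sketch
(userspace C, not a patch - struct page_stub, age_page(),
deactivate_threshold and the AGE_* constants are all made up for the
example, they're not existing kernel symbols):

#include <stdio.h>

#define AGE_ADVANCE   3    /* credit added when the page was referenced */
#define AGE_DECLINE   1    /* linear penalty, replacing age >>= 1       */
#define AGE_MIN     -64
#define AGE_MAX      64

struct page_stub {
	int age;           /* signed, so well-aged pages can sink below 0 */
	int referenced;
};

/* Raised when a surge of deactivation is needed, instead of just
 * scanning proportionally harder. */
static int deactivate_threshold = -32;

static void deactivate(struct page_stub *page)
{
	printf("deactivating page with age %d\n", page->age);
}

static void age_page(struct page_stub *page)
{
	if (page->referenced) {
		page->referenced = 0;
		if (page->age < AGE_MAX - AGE_ADVANCE)
			page->age += AGE_ADVANCE;
	} else if (page->age > AGE_MIN) {
		page->age -= AGE_DECLINE;   /* subtract, not shift */
	}

	if (page->age < deactivate_threshold)
		deactivate(page);
}

int main(void)
{
	struct page_stub p = { .age = -30, .referenced = 0 };
	age_page(&p);   /* unreferenced: age drops to -31, above threshold */
	age_page(&p);   /* -32: still not below the threshold             */
	age_page(&p);   /* -33: now below -32, so the page is deactivated */
	return 0;
}

The point is just that pressure is expressed by moving
deactivate_threshold up or down, while the per-page cost of aging stays
a single add or subtract.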
> So the global inactive_shortage() decision is certainly an important
> one: it should trigger early enough to matter, but not so early that
> we trigger it even when most local zones are really totally saturated
> and we really shouldn't be scanning at all.
Yes. The inactive shortage needs to be a function of the length of
the inactive_dirty queue rather than just of the amount by which free
pages fall below some fixed minimum. The target length of the
inactive_dirty queue can in turn be a function of the global free
shortage (which is where the minimum free numbers get used) and the
transfer rate of the disk(s). Again, this is experimental - without
careful work a feedback mechanism like this could oscillate wildly.
It's most probably the way forward in the long run, though.
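
Roughly what I have in mind, again as a userspace sketch rather than
anything resembling a patch (disk_pages_per_sec, WRITEBACK_WINDOW_SECS,
inactive_target() and the example numbers are all invented for
illustration, and there's no damping here, which is exactly where the
oscillation danger comes from):

#include <stdio.h>

/* Hypothetical measured disk transfer rate, expressed in pages/sec. */
static unsigned long disk_pages_per_sec = 2048;

/* How many seconds of writeback we want queued on inactive_dirty. */
#define WRITEBACK_WINDOW_SECS 2

static unsigned long free_shortage(unsigned long free_pages,
				   unsigned long freepages_min)
{
	return free_pages < freepages_min ? freepages_min - free_pages : 0;
}

/* Target length of inactive_dirty: scaled by the global free shortage,
 * but bounded by what the disks can actually move in the window. */
static unsigned long inactive_target(unsigned long free_pages,
				     unsigned long freepages_min)
{
	unsigned long want = free_shortage(free_pages, freepages_min);
	unsigned long can_write = disk_pages_per_sec * WRITEBACK_WINDOW_SECS;

	return want < can_write ? want + can_write : 2 * can_write;
}

/* Shortage is measured against the queue length, not a fixed minimum. */
static long inactive_shortage(unsigned long inactive_dirty_len,
			      unsigned long free_pages,
			      unsigned long freepages_min)
{
	return (long)inactive_target(free_pages, freepages_min)
	     - (long)inactive_dirty_len;
}

int main(void)
{
	/* 1000 free pages against a minimum of 4096, with a short
	 * inactive_dirty queue: positive shortage, so deactivate more. */
	printf("shortage = %ld\n", inactive_shortage(512, 1000, 4096));
	return 0;
}
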
--
Daniel