Naturally. However, this is orthogonal. Consider the case where you've hit
the wall and the inactive list has suffered sudden depletion. At this point
you have to deactivate a large number of pages and you will have few or no
intervening age-up events (because you hit the wall and nobody's moving).
It's a waste of CPU and real time to cycle through the active list five
times just to deactivate enough pages. You should cycle through it at most
twice: once to age up any pages with their referenced bit set, and a second
time to deactivate the required number of pages according to a threshold
estimated on the first pass.
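To make the shape of that concrete, here's a minimal sketch of such a
two-pass scan. It's illustrative only, not real kernel code: the list and
page structures, MAX_AGE and the way a page gets moved to the inactive list
are all placeholders.

	/*
	 * Hypothetical two-pass deactivation sketch.
	 * Pass 1: age up referenced pages and build an age histogram.
	 * Then pick the lowest age threshold that covers `needed` pages.
	 * Pass 2: deactivate pages at or below that threshold.
	 */
	#define MAX_AGE 7		/* placeholder maximum page age */

	struct page_stub {
		struct page_stub *next;
		unsigned age;
		int referenced;		/* stands in for the Ref bit */
		int active;
	};

	static void two_pass_deactivate(struct page_stub *active_list,
					unsigned needed)
	{
		unsigned histogram[MAX_AGE + 1] = { 0 };
		unsigned threshold, count = 0;
		struct page_stub *page;

		/* Pass 1: age up referenced pages, record the age distribution. */
		for (page = active_list; page; page = page->next) {
			if (page->referenced) {
				page->referenced = 0;
				if (page->age < MAX_AGE)
					page->age++;
			}
			histogram[page->age]++;
		}

		/* Lowest threshold that covers the required number of pages. */
		for (threshold = 0; threshold < MAX_AGE; threshold++) {
			count += histogram[threshold];
			if (count >= needed)
				break;
		}

		/* Pass 2: deactivate pages at or below the threshold. */
		for (page = active_list; page && needed; page = page->next) {
			if (page->age <= threshold) {
				page->active = 0;	/* i.e. move to inactive list */
				needed--;
			}
		}
	}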
This is just the first common example that came to mind where a variable
deactivation threshold is obviously desirable; I'm sure there are others.
> When we go one step further, where the working set approaches
> the size of physical memory, we should probably start doing
> load control FreeBSD-style ... pick a process and deactivate
> as many of its pages as possible. By introducing unfairness
> like this we'll be sure that only one or two processes will
> slow down on the next VM load spike, instead of all processes.
>
> Once we reach permanent heavy overload, we should start doing
> process scheduling, restricting the active processes to a
> subset of all processes in such a way that the active processes
> are able to make progress. After a while, give other processes
> their chance to run.
No question about the need for higher-level process control, but the
low-level machinery could still be improved, don't you think?
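For what it's worth, the victim-selection part of the load control you
describe might look roughly like the sketch below. Again, this is purely
illustrative with made-up structures; pick_victim() and the fields it looks
at aren't real interfaces.

	/*
	 * Hypothetical sketch of FreeBSD-style load control: pick one
	 * victim process whose pages get deactivated wholesale, so the
	 * cost of the load spike is concentrated on one or two processes
	 * instead of being spread across all of them.
	 */
	struct proc_stub {
		struct proc_stub *next;
		unsigned long rss;		/* resident pages */
		int recently_run;
	};

	static struct proc_stub *pick_victim(struct proc_stub *proc_list)
	{
		struct proc_stub *p, *victim = NULL;

		/* Prefer the largest process that hasn't run recently. */
		for (p = proc_list; p; p = p->next) {
			if (p->recently_run)
				continue;
			if (!victim || p->rss > victim->rss)
				victim = p;
		}
		return victim;
	}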
--
Daniel