It's not really CPU load - the load average in Linux (and some other UNIXes
too) also counts processes blocked in disk wait, not just runnable ones.
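As a rough illustration (not the kernel's actual code - the function and
variable names here are simplified stand-ins), the sample fed into the
decaying load average counts tasks in uninterruptible (disk) sleep as well
as runnable ones, which is why a box grinding on disk can show a high load
with a nearly idle CPU. The fixed-point constants mimic the 2.4-era scheme:

/*
 * Illustrative sketch only, loosely modelled on the 2.4-era avenrun[]
 * calculation: each sample counts runnable tasks plus tasks in
 * uninterruptible (disk) sleep, fed into an exponentially decaying average.
 */
#include <stdio.h>

#define FSHIFT   11                  /* fixed-point shift */
#define FIXED_1  (1 << FSHIFT)       /* 1.0 in fixed point */
#define EXP_1    1884                /* 1-minute decay factor, fixed point */

static unsigned long avenrun_1min;   /* 1-minute average, fixed point */

/* One sample every 5 seconds: decay the old average toward the new count. */
static void calc_load(unsigned long nr_running, unsigned long nr_uninterruptible)
{
        unsigned long active = (nr_running + nr_uninterruptible) * FIXED_1;

        avenrun_1min = (avenrun_1min * EXP_1 + active * (FIXED_1 - EXP_1)) >> FSHIFT;
}

int main(void)
{
        /* Two runnable tasks plus three tasks stuck in disk wait still push
         * the "load" toward 5, even though the CPU is mostly idle. */
        for (int i = 0; i < 100; i++)
                calc_load(2, 3);

        printf("load: %lu.%02lu\n", avenrun_1min >> FSHIFT,
               (avenrun_1min & (FIXED_1 - 1)) * 100 / FIXED_1);
        return 0;
}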
> - the meminfo shows a big difference from former versions in the balance of
> inact_dirty and active. This pre10 tends to have a _lot_ more inact_dirty pages
> than active (compared to pre9 and before) in my test. I guess this is intended
> by the (used-once) patch. So take this as a hint that your work performs as
> expected.
No, I think they are related, and bad. I suspect it just means that pages
really do not get elevated to the active list, and that the VM is probably
_too_ unwilling to activate pages. That's bad too - it means that the
inactive list is solely responsible for working set changes, and the VM
won't bother with any other pages, which also leads to bad results.
That's always the downside of having multiple lists of any kind - if the
balance between the lists is bad, performance will be bad. Historically,
the active list was the big one, and the other ones mostly didn't matter,
which made the balancing issue much less noticeable.
[ This is also the very same problem we used to have with buffer cache
pages vs mapped pages vs other caches ]
The fix may be to just make the inactive lists not do aging at all.
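To make the used-once behaviour concrete, here is a toy sketch - not the
actual VM code; struct page fields, mark_page_accessed() and
deactivate_page() are simplified stand-ins. A page on the inactive list is
promoted only on its second reference, and "no aging on the inactive list"
corresponds to never clearing the referenced bit while a page sits there:

#include <stdbool.h>
#include <stdio.h>

struct page {
        struct page *next;
        bool referenced;      /* touched once while on the inactive list */
        bool active;          /* true if the page is on the active list */
};

static struct page *active_list;
static struct page *inactive_list;

static void push(struct page **list, struct page *p)
{
        p->next = *list;
        *list = p;
}

/* Called on every access: the first touch only sets the referenced bit,
 * the second touch is what promotes the page to the active list. */
static void mark_page_accessed(struct page *p)
{
        if (p->active)
                return;
        if (!p->referenced) {
                p->referenced = true;        /* used once: stay inactive */
                return;
        }
        p->referenced = false;               /* used twice: activate */
        p->active = true;
        push(&active_list, p);               /* unlinking from inactive omitted */
}

/* Deactivation path.  Not aging the inactive list corresponds to never
 * clearing the referenced bit here: one further access is then enough
 * to get the page promoted again. */
static void deactivate_page(struct page *p)
{
        p->active = false;
        push(&inactive_list, p);
}

int main(void)
{
        struct page p = { 0 };

        deactivate_page(&p);
        mark_page_accessed(&p);              /* first touch: still inactive */
        printf("after 1st touch: active=%d\n", p.active);
        mark_page_accessed(&p);              /* second touch: promoted */
        printf("after 2nd touch: active=%d\n", p.active);
        return 0;
}

If promotion almost never happens in practice, the inactive list ends up
carrying the whole working set by itself, which is the imbalance described
above.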
Linus