I disagree. It is true that a VM could be made sufficiently complex that
it would properly analyze every possible sequence of execution and have
perfect prescience. It would probably take a few hundred gigabytes of table
structure to do that, and just scanning those tables would itself slow the
VM down, I dare say.
In short, no VM is going to work perfectly -- it extrapolates a model of
behavior onto a real-world sequence of events, and so there will always be
some real-world set of programs and events under which it does worse than
some other model of behavior (VM), including the one that never pages at
all. We just want that to happen rarely (whatever that means).
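To make that concrete, here is a toy user-space sketch (my own
illustration, nothing from the kernel; the frame count and reference
patterns are invented for the demo). With four frames and strict LRU, a
cyclic scan over five pages misses on every reference after warm-up, while
a peaked working set under the exact same policy hits about 90% of the
time:

/*
 * Toy user-space sketch, not kernel code: a fixed replacement policy
 * can always be beaten by some reference pattern.  With FRAMES page
 * frames, a cyclic scan over FRAMES+1 pages makes strict LRU miss on
 * every reference after warm-up, while the same policy does fine on a
 * "peaked" working set.  Frame count and patterns are invented here.
 */
#include <stdio.h>

#define FRAMES 4		/* physical page frames available */
#define REFS   1000		/* length of each reference string */

/* Count hits for strict LRU over a reference string of n pages. */
static int lru_hits(const int *refs, int n)
{
	int frame[FRAMES], age[FRAMES];
	int i, j, hits = 0;

	for (j = 0; j < FRAMES; j++) {
		frame[j] = -1;		/* frame starts empty */
		age[j] = 0;
	}

	for (i = 0; i < n; i++) {
		int victim = 0, hit = 0;

		for (j = 0; j < FRAMES; j++) {
			age[j]++;	/* everyone ages one tick */
			if (frame[j] == refs[i]) {
				hit = 1;
				age[j] = 0;
			}
		}
		if (hit) {
			hits++;
			continue;
		}
		for (j = 1; j < FRAMES; j++)	/* evict oldest page */
			if (age[j] > age[victim])
				victim = j;
		frame[victim] = refs[i];
		age[victim] = 0;
	}
	return hits;
}

int main(void)
{
	int scan[REFS], peaked[REFS], i;

	/* Adversarial for LRU: cyclic scan over FRAMES+1 pages. */
	for (i = 0; i < REFS; i++)
		scan[i] = i % (FRAMES + 1);

	/* Friendly for LRU: 90% of references go to pages 0 and 1. */
	for (i = 0; i < REFS; i++)
		peaked[i] = (i % 10 < 9) ? i % 2 : 2 + i % 8;

	printf("cyclic scan: %d/%d hits\n", lru_hits(scan, REFS), REFS);
	printf("peaked set : %d/%d hits\n", lru_hits(peaked, REFS), REFS);
	return 0;
}

Flip the eviction rule to evict the most-recently-used page instead and
the two patterns roughly trade places: the scan starts hitting and the
peaked set starts thrashing. Every fixed rule has its adversary.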
A VM that is working properly is one that satisfies the beholder (sort of
like beauty). And in fact, if you look at the various similar discussions
on Microsoft newsgroups (sorry ;-), you may notice they don't seem to be
able to come up with a mechanism that handles large uniform-access working
sets and still works well with "normal" (highly peaked) working sets. So I
doubt it is an easy problem.
--Charles