It's not as random as that. The idea being considered was: suppose a
program starts up, goes through a period of intense, cache-sucking
activity, then exits. Could we reload the applications it just
displaced, so that the disk activity to reload them doesn't have to
happen the first time the user touches the keyboard or mouse? Sure, we
obviously can; with how much complexity is another question entirely ;-)
So we could probably eliminate more than 80% of the latency we now see
in such a situation, and if anything that estimate was conservative.
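
To make the idea concrete, here is roughly what the reload half could
look like if it were done from userspace rather than in the kernel. A
sketch only: the files.list of displaced files is assumed to have been
recorded by some other mechanism, and posix_fadvise is just one way to
kick off the readahead without blocking, so the requests can all be
issued while the disk is already spinning:

#define _XOPEN_SOURCE 600	/* for posix_fadvise */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Ask the kernel to start reading the whole file back into the
 * page cache; POSIX_FADV_WILLNEED returns without waiting for IO. */
static void preload(const char *path)
{
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return;
	posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);	/* len 0 = whole file */
	close(fd);
}

int main(void)
{
	char path[4096];
	FILE *list = fopen("files.list", "r");	/* one path per line */

	if (!list)
		return 1;
	while (fgets(path, sizeof(path), list)) {
		path[strcspn(path, "\n")] = '\0';
		preload(path);
	}
	fclose(list);
	return 0;
}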
Now what's the cost in battery life? Suppose it's a 128 meg machine,
1/3 filled with program text and data. Hopefully, the working sets
that were evicted are largely coherent, so we can read them back in at
a rate not too badly degraded from the drive's transfer rate, say 5
MB/sec. Of those 40-odd megabytes, only the part the user will touch
right away really needs to come back before the machine idles again,
call it 15 MB. That gives about three seconds of intense reading to
restore something resembling the previous working set; then the disk
can spin down and perhaps the machine will suspend itself. So the
question is, how much longer did the machine have to run to do this?
Well, on my machine updatedb takes 5-10 minutes, so 3 seconds of
activity tacked onto the end of the episode amounts to about 1% at
worst, and this is where the 1% figure came from.
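
The premise that the evicted working set is what dominates the reload
is easy to sanity-check from userspace, too. Here is a sketch, under
the assumption that the file is small enough to map in one piece: mmap
it and ask mincore() which pages are still resident; the non-resident
remainder is roughly what the reload would have to read back:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct stat st;
	int fd;

	if (argc != 2)
		return 1;
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
		return 1;

	long page = sysconf(_SC_PAGESIZE);
	size_t pages = (st.st_size + page - 1) / page;

	/* the mapping alone faults nothing in, so this doesn't
	 * perturb the measurement */
	void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	unsigned char *vec = malloc(pages);	/* one byte per page */

	if (map == MAP_FAILED || !vec || mincore(map, st.st_size, vec) < 0)
		return 1;

	size_t resident = 0;
	for (size_t i = 0; i < pages; i++)
		resident += vec[i] & 1;		/* low bit: page in cache */

	printf("%s: %zu of %zu pages resident\n", argv[1], resident, pages);
	return 0;
}

Run it on a big binary before and after the cache-sucking episode and
the drop in resident pages is the amount the 5 MB/sec figure has to
cover.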
I'm not saying this would be an easy hack, just that it's possible and
the numbers work.
--
Daniel