Yes, exactly, did you read my mind? Page units are the natural quantum
of the time base for the whole mm. When we clock all mm events against
a timebase that advances by 1 << order pages at each __alloc_pages call,
lots of memory spikes are magically transformed into smooth curves and
it becomes immediately obvious how much scanning we need to do at each
instant. Now, a warning: this is a major shift in viewpoint and, I'll
wager, an unwelcome one on this side of the 2.5 split. I'd be happy to
work with you on a skunkworks-type proof of concept, though.
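To pin down what I mean, here is roughly the shape of it (a sketch only;
memory_clock, memory_clock_tick and scan_budget are names I just made
up, nothing here is lifted from an existing patch):

static atomic_t memory_clock = ATOMIC_INIT(0);

/* Called on the allocation path: advance the clock by 2^order pages. */
static inline void memory_clock_tick(unsigned int order)
{
	atomic_add(1 << order, &memory_clock);
}

/*
 * Scanning is budgeted against pages allocated instead of wall time,
 * so a burst of allocations buys a proportional burst of scanning.
 */
static unsigned long scan_budget(unsigned long last_clock)
{
	return atomic_read(&memory_clock) - last_clock;
}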
> memory_clock_rubberband is calculated to be close to what
> memory_clock should have been MEMORY_CLOCK_WINDOW seconds earlier,
> using current values and information about how long it has been since
> the last update. This makes it possible to recalculate the target more
> often when pressure is high - and it simplifies kswapd too...
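If I'm reading that right, it boils down to something like the following
(my paraphrase in code, not your patch; the window value and the jiffies
stamp are guesses on my part):

#define MEMORY_CLOCK_WINDOW	5	/* seconds - assumed, not from your patch */

static unsigned long memory_clock_rubberband;
static unsigned long rubberband_stamp;		/* jiffies at last update */

static void update_rubberband(unsigned long memory_clock)
{
	unsigned long elapsed = jiffies - rubberband_stamp;
	unsigned long gap = memory_clock - memory_clock_rubberband;

	/* Close a fraction of the gap proportional to the time elapsed. */
	if (elapsed >= MEMORY_CLOCK_WINDOW * HZ)
		memory_clock_rubberband = memory_clock;
	else
		memory_clock_rubberband += gap * elapsed / (MEMORY_CLOCK_WINDOW * HZ);
	rubberband_stamp = jiffies;
}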
I'll supply a cute, efficient filter that does what you're doing with
the rubberband, on a somewhat stronger theoretical basis, as soon as I
wake up again. Or you can look for my earlier "Early flush with
bandwidth estimation" post. (Try to ignore the incorrect volatile
handling, please.)
BTW, you left out an interesting detail: any performance measurements
you've already done.
--
Daniel