I bet we can do this in a much simpler way with less
reliance on magic numbers. My theory goes as follows:
The problem with the current code is that the global
free target (freepages.high) is the same as the sum
of the per-zone free targets.
Because of this, we will always run into local
free shortages, and the VM has to eat free pages from
all zones with no chance to properly balance usage
between the zones according to VM activity in each
zone and the desirability of allocating from it.
We could try increasing the _global_ free target to
something like 2 or 3 times the sum of the per-zone
free targets.
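Roughly what I mean, as a sketch (the struct layouts and names
below are stand-ins for illustration, not the exact kernel
definitions; the factor of 3 is just the kind of multiplier I'm
suggesting we try):

	/* Minimal stand-ins for the zone and freepages structures. */
	#define MAX_NR_ZONES 3

	struct zone {
		unsigned long pages_high;	/* per-zone free target */
	};

	struct zone zones[MAX_NR_ZONES];

	struct {
		unsigned long min, low, high;	/* global free targets */
	} freepages;

	#define GLOBAL_FREE_FACTOR 3	/* try 2 or 3 */

	/* Proposed change: still derive the global free target from
	 * the per-zone targets, but make it a multiple of their sum
	 * instead of equal to it. */
	void set_global_free_target(void)
	{
		unsigned long sum = 0;
		int i;

		for (i = 0; i < MAX_NR_ZONES; i++)
			sum += zones[i].pages_high;

		freepages.high = GLOBAL_FREE_FACTOR * sum;
	}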
By doing that, the system would have a much better
chance of leaving e.g. the DMA zone alone for allocations,
because kswapd wouldn't just free the number of pages
required to bring each zone to the edge; it would free
a whole bunch more pages, in whatever zone they happen
to be. That way the VM would do the bulk of the
allocations from the least loaded zone and leave the
DMA zone (at the end of the fallback chain) alone.
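In kswapd terms, the effect I'm after looks something like this
(again only a sketch: the reclaim helpers are stand-ins with
simplified signatures, not the real kernel functions, and
freepages is the struct from the sketch above):

	/* From the sketch above. */
	extern struct { unsigned long min, low, high; } freepages;

	/* Stand-ins for the reclaim primitives; the real kernel
	 * signatures differ. */
	unsigned long nr_free_pages(void);
	int try_to_free_pages(unsigned long goal);

	/* Keep reclaiming until the *global* target is met, not just
	 * until each zone reaches its own edge.  The freed pages land
	 * in whichever zone they came from, so a lightly used zone
	 * like DMA builds up a surplus and allocations only fall
	 * back to it last. */
	void kswapd_balance(void)
	{
		while (nr_free_pages() < freepages.high) {
			if (!try_to_free_pages(freepages.high -
					       nr_free_pages()))
				break;	/* nothing reclaimable now */
		}
	}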
I'm not sure if this would work, but just increasing
the global free target to something significantly
higher than the sum of the per-zone free targets
is an easy-to-test change ;)
regards,
Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...
http://www.surriel.com/		http://distro.conectiva.com/
Send all your spam to aardvark@nl.linux.org (spam digging piggy)