This still doesn't make sense if the disk bandwidth isn't being used.
> Maybe we should just see if anything in the first few MB
> of inactive pages was freeable, limiting the first scan to
> something like 1 or maybe even 5 MB maximum (freepages.min?
> freepages.high?) and flushing as soon as we find more unfreeable
> pages than that ?
There are two cases, file-backed and swap-backed pages.
For file-backed pages what we want is pretty simple: when 1) disk bandwidth
is less than xx% used and 2) memory pressure is moderate, just submit
whatever's dirty. As pressure increases and bandwidth gets loaded up
(including read traffic), leave things on the inactive list longer to allow
more chances for combining and better clustering decisions.
There is no such obvious answer for swap-backed pages; the main difference is
what should happen under low-to-moderate pressure. On a server we probably
want to pre-write as many inactive/dirty pages to swap as possible in order
to respond better to surges, even when pressure is low. We don't want that
behaviour on a laptop, though, or the disk would never spin down. There's a
configuration parameter in there somewhere.
-- Daniel