Hi,
the following patch makes use of the fact that refill_inactive()
now calls swap_out() before calling refill_inactive_scan() and
the fact that the inactive_dirty list is now reclaimed in a fair
LRU order.
Background scanning can now be done with a simple call to
refill_inactive() instead of refill_inactive_scan(), which gave
mapped pages an unfair advantage over unmapped ones.
The special-casing of the amount to scan in refill_inactive_scan()
is removed as well; there's absolutely no reason we'd need it with
the current VM balance.
regards,
Rik
--
--- linux-2.4.6-pre3/mm/vmscan.c.orig	Thu Jun 14 12:28:03 2001
+++ linux-2.4.6-pre3/mm/vmscan.c	Fri Jun 15 11:55:09 2001
@@ -695,13 +695,6 @@
 	int page_active = 0;
 	int nr_deactivated = 0;
 
-	/*
-	 * When we are background aging, we try to increase the page aging
-	 * information in the system.
-	 */
-	if (!target)
-		maxscan = nr_active_pages >> 4;
-
 	/* Take the lock while messing with the list... */
 	spin_lock(&pagemap_lru_lock);
 	while (maxscan-- > 0 && (page_lru = active_list.prev) != &active_list) {
@@ -978,7 +971,7 @@
 			recalculate_vm_stats();
 
 			/* Do background page aging. */
-			refill_inactive_scan(DEF_PRIORITY, 0);
+			refill_inactive(GFP_KSWAPD, 0);
 		}
 
 		run_task_queue(&tq_disk);