Hmm, is this the bugfix you mean? As far as I can tell it shouldn't
really matter to me, since I did it in an alternate way in the first place.
diff -urN 2.4.17pre8/mm/vmscan.c 2.4.17/mm/vmscan.c
--- 2.4.17pre8/mm/vmscan.c Fri Nov 23 08:21:05 2001
+++ 2.4.17/mm/vmscan.c Fri Dec 21 20:06:55 2001
@@ -338,7 +338,7 @@
 {
	struct list_head * entry;
	int max_scan = nr_inactive_pages / priority;
-	int max_mapped = nr_pages << (9 - priority);
+	int max_mapped = min((nr_pages << (10 - priority)), max_scan / 10);

	spin_lock(&pagemap_lru_lock);
	while (--max_scan >= 0 && (entry = inactive_list.prev) != &inactive_list) {
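(To see what that clamp does, take some illustrative numbers, not from
the patch: with priority = 6, nr_pages = SWAP_CLUSTER_MAX = 32 and
nr_inactive_pages = 60000, we get max_scan = 10000 and
max_mapped = min(32 << 4, 10000 / 10) = min(512, 1000) = 512; at
priority = 1 the shift alone would give 32 << 9 = 16384, but the
max_scan / 10 term clamps it to 60000 / 10 = 6000.)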
Furthermore, I hate those hardwired "10" magic numbers that you keep
adding. The fewer of them the better. At least I put my magics in a
sysctl.
See what my max_mapped is:

	int orig_max_mapped = SWAP_CLUSTER_MAX * vm_mapped_ratio;
It is controlled by vm_mapped_ratio and by the swap cluster size, so we
unmap one swap cluster for every vm_mapped_ratio of mapped pages
scanned. This ensures we only unmap when there's some relevant work to
do. The lower vm_mapped_ratio is, the earlier the kernel will start
swapping/paging. (Ah, and of course SWAP_CLUSTER_MAX would better be a
sysctl too, but it isn't yet.)
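To make the accounting concrete, here is a toy userspace simulation of
the idea, just a sketch: the constant values and the every-third-page-
is-mapped pattern are made up for illustration, not taken from the
kernel.

	#include <stdio.h>

	/* Illustrative stand-ins for the kernel tunables. */
	#define SWAP_CLUSTER_MAX	32
	static int vm_mapped_ratio = 100;	/* the sysctl knob */

	int main(void)
	{
		int scanned_mapped = 0, clusters_unmapped = 0;
		int max_mapped = SWAP_CLUSTER_MAX * vm_mapped_ratio;
		int i;

		/* pretend every 3rd page on the inactive list is mapped */
		for (i = 0; i < 20000; i++) {
			if (i % 3)
				continue;	/* unmapped: reclaimed directly */
			scanned_mapped++;
			if (--max_mapped <= 0) {
				/* one cluster unmapped per SWAP_CLUSTER_MAX *
				 * vm_mapped_ratio mapped pages scanned */
				clusters_unmapped++;
				max_mapped = SWAP_CLUSTER_MAX * vm_mapped_ratio;
			}
		}
		printf("mapped scanned %d, clusters unmapped %d\n",
		       scanned_mapped, clusters_unmapped);
		return 0;
	}

Raising vm_mapped_ratio in the simulation delays the first cluster
unmap, the same way it delays the onset of swapping in the kernel.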
Andrea