In some *brief* testing here, it appears that the use-once changes
provide an improvement for light-to-medium workloads.  With swap-intensive
workloads, the (possibly accidental) changes to swapcache aging
in fact improve things a lot, and use-once makes things a little worse.
This is a modified Galbraith test: 64 megs of RAM, `make -j12
bzImage', dual CPU:
2.4.7:                   6:54
2.4.7+Daniel's patch:    6:06
2.4.7+the below patch:   5:56
--- mm/swap.c	2001/01/23 08:37:48	1.3
+++ mm/swap.c	2001/07/25 04:08:59
@@ -234,8 +234,8 @@
 	DEBUG_ADD_PAGE
 	add_page_to_active_list(page);
 	/* This should be relatively rare */
-	if (!page->age)
-		deactivate_page_nolock(page);
+	deactivate_page_nolock(page);
+	page->age = 0;
 	spin_unlock(&pagemap_lru_lock);
 }
This change to lru_cache_add() is the only change made to 2.4.7,
and it provides the 17% speedup for this swap-intensive load.
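For reference, this is roughly what lru_cache_add() looks like with that
hunk applied.  It is a reconstruction from the hunk plus the surrounding
2.4.7 code, not a verbatim copy, so lines outside the hunk may differ
slightly from the real source:

void lru_cache_add (struct page * page)
{
	spin_lock(&pagemap_lru_lock);
	DEBUG_ADD_PAGE
	add_page_to_active_list(page);
	/* This should be relatively rare */
	/*
	 * With the patch, every newly added page is pushed straight onto
	 * the inactive list with its age reset to zero, rather than doing
	 * that only when the page's age is already zero.
	 */
	deactivate_page_nolock(page);
	page->age = 0;
	spin_unlock(&pagemap_lru_lock);
}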
With the same setup, running a `grep -r /usr/src' in parallel
with a `make -j3 bzImage', the `make -j3' takes:
2.4.7: 5:13
2.4.7+Daniel: 5:03
2.4.7+the above patch: 5:16
With the same setup, running a `grep -r /usr/src' in parallel
with a `make -j1 bzImage', the `make -j1' takes:
2.4.7: 9:25
2.4.7+Daniel: 8:55
2.4.7+the above patch: 9:35
So with lighter loads, use-once is starting to provide a benefit, and the
unconditional deactivation in the above patch is too aggressive.
> This is a _new_ thing, and I would like to know how that is changing the
> whole VM behaviour..
Sure. Daniel's patch radically changes the aging of swapcache
and other pages, and with some workloads it appears that it is
this change which brings about the performance increase, rather
than the intended use-once stuff.
I suspect the right balance here is to take use-once, but *not*
take its changes to lru_cache_add(). That's a separate thing.
It seems that lru_cache_add() is making decisions at too low a level, and
they are sometimes wrong.  The decision as to what age to give the page,
and whether it should be activated, needs to be made at a higher level.
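
Purely for illustration, pushing that decision up could look something
like the sketch below.  The name lru_cache_add_with_hint() and its
arguments are hypothetical, not from any posted patch; the point is only
that the caller, which knows whether this is e.g. a swapin or a streaming
file read, supplies the starting age and list:

/*
 * Hypothetical sketch only: the caller decides the page's starting age
 * and whether it begins life on the active or the inactive list.
 */
void lru_cache_add_with_hint (struct page * page, int initial_age, int activate)
{
	spin_lock(&pagemap_lru_lock);
	page->age = initial_age;
	add_page_to_active_list(page);
	if (!activate)
		/* Caller wants a "cold" page: move it to the inactive list now */
		deactivate_page_nolock(page);
	spin_unlock(&pagemap_lru_lock);
}

A swapin path could then ask for a non-zero age and activation, while a
use-once style file read could ask for age zero and immediate
deactivation.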