> Marcelo Tosatti wrote:
> >
> > Daniel's patch adds "drop behind" (that is, adding swapcache
> > pages to the inactive dirty) behaviour to swapcache pages.
>
> In some *brief* testing here, it appears that the use-once changes
> make an improvement for light-medium workloads. With swap-intensive
> workloads, the (possibly accidental) changes to swapcache aging
> in fact improve things a lot, and use-once makes things a little worse.
>
> This is a modified Galbraith test: 64 megs of RAM, `make -j12
> bzImage', dual CPU:
>
> 2.4.7: 6:54
> 2.4.7+Daniel's patch 6:06
> 2.4.7+the below patch 5:56
>
> --- mm/swap.c 2001/01/23 08:37:48 1.3
> +++ mm/swap.c 2001/07/25 04:08:59
> @@ -234,8 +234,8 @@
> DEBUG_ADD_PAGE
> add_page_to_active_list(page);
> /* This should be relatively rare */
> - if (!page->age)
> - deactivate_page_nolock(page);
> + deactivate_page_nolock(page);
> + page->age = 0;
> spin_unlock(&pagemap_lru_lock);
> }
>
> This change to lru_cache_add() is the only change made to 2.4.7,
> and it provides the 17% speedup for this swap-intensive load.
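For readability, this is roughly how lru_cache_add() reads with that hunk
applied. The function header and the lock/unlock pair fall outside the
quoted context and are reconstructed from 2.4-era mm/swap.c, so take this
as an approximation rather than the exact source:

void lru_cache_add(struct page * page)
{
	spin_lock(&pagemap_lru_lock);
	DEBUG_ADD_PAGE
	add_page_to_active_list(page);
	/* This should be relatively rare */
	deactivate_page_nolock(page);	/* old code: only when !page->age */
	page->age = 0;
	spin_unlock(&pagemap_lru_lock);
}

In other words, every page entering the LRU this way, including pages
brought in by swapin readahead, is immediately pushed to the inactive list
with age 0: "drop behind" for everything, not just for pages that already
arrived with age 0.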
After some thought I think this is due to swapin readahead.
We add "drop behind" behaviour to swap readahead, so readahead "misses"
have less impact on the system.
That is nice, but dangerous under swap-I/O-intensive loads:
we start swapin readahead, bring the first page of the "cluster" into
memory, block on the I/O for the next swap pages (the ones we are
reading ahead), and in the meantime the first page we read is reclaimed
by someone else.
Then we have to reread the first page in the direct
read_swap_cache_async() call made by do_swap_page().
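To make that failure mode concrete, here is a sketch of the fault path in
question. The names follow 2.4-era mm code, but the body and argument
lists are simplified for illustration, so treat it as an approximation of
do_swap_page() rather than the exact 2.4.7 source:

static int do_swap_page_sketch(swp_entry_t entry)
{
	struct page *page;

	/* Fast path: the page is still in the swap cache. */
	page = lookup_swap_cache(entry);
	if (!page) {
		/*
		 * Start readahead on the surrounding swap cluster. With
		 * "drop behind", these pages (including the one we
		 * actually faulted on) enter the LRU with age 0, so they
		 * are easy prey for reclaim while we sleep on the
		 * cluster's I/O.
		 */
		swapin_readahead(entry);

		/*
		 * If the readahead copy was already reclaimed, this has
		 * to issue a second, synchronous read of the same page:
		 * the reread described above.
		 */
		page = read_swap_cache_async(entry);
		if (!page)
			return -1;	/* I/O error or out of memory */
	}
	/* ... map the page into the page tables as usual ... */
	return 1;
}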
I really want to confirm whether this is what is actually happening.
Hope to do it soon.