Thanks for pointing this out. It's happening because the VM in
2.5.39 tries to avoid stalling tasks for too long.
That works well, so qsbench just gets in and submits more reads
against the swap device much earlier than it used to. The new IO
scheduler then obligingly promotes the swap reads ahead of the
swap writes, and we end up doing a ton of seeking.
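
To make that concrete, here's a minimal user-space sketch of such a
read-over-write dispatch policy (illustrative only: the struct and the
queue handling are my assumptions, not the actual elevator code).
Queued reads are always serviced first, so the faulting task's swap
reads land ahead of the sequential swap writeout and the disk head
bounces between the two regions:

#include <stdio.h>

enum op { READ_OP, WRITE_OP };

struct request {
        enum op dir;
        long sector;            /* where on the swap device the IO lands */
};

/* Any queued read wins over every queued write, so a stream of swap
 * reads keeps jumping ahead of the swap writeout and the disk head
 * seeks back and forth between the two regions. */
static int pick_next(const struct request *q, int n)
{
        for (int i = 0; i < n; i++)
                if (q[i].dir == READ_OP)
                        return i;
        return 0;               /* no reads pending: oldest write */
}

int main(void)
{
        struct request q[] = {
                { WRITE_OP, 1000 }, { WRITE_OP, 1008 }, /* sequential swap writeout */
                { READ_OP,  9000 }, { READ_OP,  9500 }, /* faulting task's swap reads */
        };
        int n = sizeof(q) / sizeof(q[0]);

        while (n > 0) {
                int i = pick_next(q, n);
                printf("%s sector %ld\n",
                       q[i].dir == READ_OP ? "read " : "write", q[i].sector);
                for (int j = i; j < n - 1; j++) /* preserve queue order */
                        q[j] = q[j + 1];
                n--;
        }
        return 0;
}

Running it dispatches the reads at 9000 and 9500 before either write
at 1000, which is the promotion described above.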
The -mm patchset has some kswapd improvements which pull most
of the difference back.
Stronger fixes for this would be a) to penalise heavily-faulting
tasks and b) to tag swap writeout as needing higher priority at the
block level; a sketch of what b) might look like follows.
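
Again a hedged user-space model, not the real block-layer interface:
the prio field and the boost rule are assumptions for illustration.
The idea is that swap writeout gets tagged up to read priority, so
it can no longer be promoted past indefinitely:

#include <stdio.h>

enum op { READ_OP, WRITE_OP };

struct request {
        enum op dir;
        int is_swap;            /* IO against the swap device? */
        int prio;               /* higher value dispatches first */
        long sector;
};

/* Reads normally outrank writes, but swap writeout is tagged up to
 * the same priority, so the scheduler stops starving it behind reads. */
static void assign_prio(struct request *rq)
{
        if (rq->dir == READ_OP)
                rq->prio = 2;
        else
                rq->prio = rq->is_swap ? 2 : 1;
}

/* Earliest request with the highest priority wins (FIFO among equals). */
static int pick_next(const struct request *q, int n)
{
        int best = 0;
        for (int i = 1; i < n; i++)
                if (q[i].prio > q[best].prio)
                        best = i;
        return best;
}

int main(void)
{
        struct request q[] = {
                { WRITE_OP, 1, 0, 1000 },       /* swap writeout */
                { WRITE_OP, 0, 0, 5000 },       /* ordinary file write */
                { READ_OP,  1, 0, 9000 },       /* incoming swap read */
        };
        int n = sizeof(q) / sizeof(q[0]);

        for (int i = 0; i < n; i++)
                assign_prio(&q[i]);

        while (n > 0) {
                int i = pick_next(q, n);
                printf("%s%s sector %ld (prio %d)\n",
                       q[i].dir == READ_OP ? "read" : "write",
                       q[i].is_swap ? " [swap]" : "", q[i].sector, q[i].prio);
                for (int j = i; j < n - 1; j++) /* preserve FIFO order */
                        q[j] = q[j + 1];
                n--;
        }
        return 0;
}

With the boost, the swap writeout at 1000 is dispatched even with a
read queued behind it; only the ordinary file write waits.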
I'll take a look at some preferential throttling later on. But
I must say that I'm not hugely worried about performance regressions
under wild swap storms. The correct fix is to go buy some more
RAM, and the kernel should not be trying to cater for underprovisioned
machines if that hurts the common case.