Thanks for the explanation. (I was offline for a couple of days, so I
didn't get to respond earlier.) This is closest to the case I was
concerned about. Perhaps with merging/write-clustering at higher
levels the number of requests might be smaller, but that is secondary.
What was worrying me is this: even when the rate of request taking
is greater than the rate of request freeing, couldn't 1 out of every
N incoming tasks pick up a just-freed request, thus starving the
waiters? (The remaining N-1 incoming tasks would wait their turn,
where N:1 is the ratio of request taking to request freeing.)
>
> The final scenario is where there are many outgoing tasks, so they
> compete, and each gets an insufficiently large number of requests.
> I suspect this is indeed happening with the current kernel's wake-all
> code. But the wake-one change makes this unlikely.
>
> When an outgoing task is woken, we know there are 32 free requests on the
> queue. There's an assumption (or design) here that the outgoing task
> will be able to get a decent number of those requests. This works. It
> may fail if there are a massive number of CPUs. But we need to increase
> the overall request queue size anyway - 128 is too small.
Yes, especially when the system has sufficient resources to increase
the request queue size, one might as well queue up on the request
queue rather than the wait queue, and minimize those context switches.
>
> Under heavy load, the general operating mode for this code is that
> damn near every task is asleep. So most work is done by outgoing threads.
>
> BTW, I suspect the request batching isn't super-important. It'll
> certainly decrease the context switch rate very much. But from a
> request-merging point of view it's unlikely to make much difference.
>
> -
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/