Well, we actually _do_ have that other way already - that should be, after
all, the whole point of request allocation.
It's when we allocate the request that we know whether we already have too
many requests pending. And we have the batching there too. Maybe the
current maximum number of requests is just way too big?
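To be concrete, the allocation path already does roughly the below. This is
a from-memory sketch, not the actual drivers/block/ll_rw_blk.c code, and
the struct layout and names are made up for illustration:

	struct request_queue {
		spinlock_t		queue_lock;
		struct list_head	free_list;	/* free request slots */
		wait_queue_head_t	free_wait;
	};

	struct request {
		struct list_head	queuelist;
		/* ... buffer heads, sector, etc. ... */
	};

	/*
	 * Sleep until a free request slot is available: the allocation
	 * itself throttles the number of in-flight requests per queue.
	 */
	static struct request *get_request_wait(struct request_queue *q)
	{
		struct request *rq;

		spin_lock_irq(&q->queue_lock);
		while (list_empty(&q->free_list)) {
			spin_unlock_irq(&q->queue_lock);
			wait_event(q->free_wait, !list_empty(&q->free_list));
			spin_lock_irq(&q->queue_lock);
		}
		rq = list_entry(q->free_list.next, struct request, queuelist);
		list_del(&rq->queuelist);
		spin_unlock_irq(&q->queue_lock);
		return rq;
	}

So the throttling point is already there - the only question is what limit
it enforces.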
[ Quick grep later ]
On my 1GB machine, we apparently allocate 1792 requests for _each_ queue.
Considering that a single request can have hundreds of buffers allocated
to it, that is just _ridiculous_.
How about capping the number of requests to something sane, like 128? Then
the natural request allocation (together with the batching that we already
have) should work just dandy.
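Something like the following is what I have in mind - again an untested
sketch, and the old scaling formula here is illustrative rather than the
exact one in drivers/block/ll_rw_blk.c:

	#define MAX_NR_REQUESTS	128	/* proposed sane per-queue cap */

	static int queue_nr_requests(unsigned long total_ram)
	{
		/*
		 * Old behaviour: scale the per-queue request count with
		 * memory size, which is how a 1GB box ends up with
		 * something like 1792 requests per queue.
		 */
		int nr = total_ram >> 19;	/* illustrative scaling only */

		/*
		 * Proposed: cap it, and let allocation-time throttling
		 * plus the existing request batching do the real work.
		 */
		return nr > MAX_NR_REQUESTS ? MAX_NR_REQUESTS : nr;
	}
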
Ben, willing to do some quick benchmarks?
Linus