It's pretty easy to do in 2.5 :-). A 2.4 backport is of course feasible,
but requires a bit more work (and possibly abstracting some of the
elevator stuff there).
> > I'm not going to go into great detail about how it works; see the
> > paper referenced in Andrea's initial post. This version may not be
> > completely true to the SFQ concept, but it should be close enough, I
> > think. It divides traffic into a fixed number of buckets (64 by
> > default) and perturbs the hash every 5 seconds (the hash is
> > shamelessly borrowed from networking at the moment, see comment).
>
> I tend to think 5 seconds is too small; 30 seconds would be better IMHO
> (it should be tested a bit).
It probably is too small; testing will show. I don't see too many
collisions with dbench 32 or 64, so...
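
For the curious, the idea is roughly like the sketch below. This is a
user-space toy, not the patch code: hashing on the submitting pid is an
assumption, and NR_BUCKETS, PERTURB_SECS, hash_pid() and pick_bucket()
are names made up for illustration (the real hash comes from networking).

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NR_BUCKETS	64		/* fixed number of buckets */
#define PERTURB_SECS	5		/* re-seed the hash this often */

static uint32_t perturbation;		/* current hash seed */
static time_t last_perturb;		/* when we last re-seeded */

/*
 * Toy multiplicative hash over (pid, seed). Only here to demonstrate
 * the bucketing idea, not the hash actually borrowed from networking.
 */
static unsigned int hash_pid(int pid, uint32_t seed)
{
	uint32_t h = (uint32_t)pid ^ seed;

	h *= 2654435761u;		/* Knuth's golden-ratio multiplier */
	return (h >> 16) % NR_BUCKETS;
}

/*
 * Pick the bucket for a request submitted by @pid, re-seeding the
 * hash every PERTURB_SECS seconds.
 */
static unsigned int pick_bucket(int pid)
{
	time_t now = time(NULL);

	if (now - last_perturb >= PERTURB_SECS) {
		perturbation = (uint32_t)rand();
		last_perturb = now;
	}
	return hash_pid(pid, perturbation);
}

int main(void)
{
	srand((unsigned int)time(NULL));
	printf("pid 100 -> bucket %u\n", pick_bucket(100));
	printf("pid 200 -> bucket %u\n", pick_bucket(200));
	return 0;
}

The point of the perturbation is just that two processes which happen
to collide don't stay stuck sharing a bucket forever.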
> > To avoid too many disk seeks, when it's time to dispatch requests to the
> > driver, we round robin all non-empty buckets and grab a single request
> > from each. These requests are sorted into the dispatch queue.
> >
> > For performance reasons, io scheduler request merging is still a
> > per-queue function (and not per-bucket).
>
> Unsure if it's worth it, but it probably won't make much difference;
> different workloads are likely working on different parts of the disk
> anyway.
The rate of unrelated merging is typically quite low, so no, per-bucket
merging probably wouldn't buy much of a performance benefit. A single
merge hash per queue also keeps the code simpler.
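
To illustrate the dispatch side, here's a minimal user-space sketch of
the one-request-per-non-empty-bucket round robin described above. Again,
not the patch code: the struct layout, the bucket_add()/dispatch_round()
names, and the sector-sorted insertion are only for illustration, and
merging (which stays per-queue) isn't shown.

#include <stdio.h>
#include <stdlib.h>

#define NR_BUCKETS	64

/* illustrative request: just a sector and a list link */
struct request {
	unsigned long sector;
	struct request *next;
};

static struct request *buckets[NR_BUCKETS];	/* per-bucket FIFOs */
static struct request *dispatch;		/* sector-sorted dispatch queue */

/*
 * Insert @rq into the dispatch queue, keeping it sorted by sector so
 * the driver sees mostly ascending requests (fewer seeks).
 */
static void dispatch_add(struct request *rq)
{
	struct request **p = &dispatch;

	while (*p && (*p)->sector < rq->sector)
		p = &(*p)->next;
	rq->next = *p;
	*p = rq;
}

/*
 * One round-robin pass: pop a single request from each non-empty
 * bucket and sort it into the dispatch queue.
 */
static void dispatch_round(void)
{
	int i;

	for (i = 0; i < NR_BUCKETS; i++) {
		struct request *rq = buckets[i];

		if (!rq)
			continue;
		buckets[i] = rq->next;
		dispatch_add(rq);
	}
}

/* FIFO-append a request with @sector onto @bucket */
static void bucket_add(int bucket, unsigned long sector)
{
	struct request *rq = malloc(sizeof(*rq));
	struct request **p = &buckets[bucket];

	rq->sector = sector;
	rq->next = NULL;
	while (*p)
		p = &(*p)->next;
	*p = rq;
}

int main(void)
{
	struct request *rq;

	bucket_add(0, 900);
	bucket_add(0, 100);	/* second in bucket 0, waits for next round */
	bucket_add(3, 50);
	bucket_add(7, 400);

	dispatch_round();
	for (rq = dispatch; rq; rq = rq->next)
		printf("dispatch sector %lu\n", rq->sector);
	return 0;
}

The sorted insertion into the dispatch queue is what keeps the seek
count sane even though we pull from many buckets per pass.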
> > In closing, let me stress that this version has not really been tested
> > all that much. It passes simple SCSI and IDE testing and should work on
> > basically any hardware.
>
> How does it feel?
I don't know yet, I haven't booted it on my workstation. Will do so
soon :)
--
Jens Axboe