> >
> >The latency results are better, with average time spent in
> >__get_request_wait being around 28 jiffies, and a max of 170 jiffies.
> >The cost is throughput; further benchmarking needs to be done, but I
> >wanted to get this out for review and testing. It should at least help
> >us decide if the request allocation code really is causing our problems.
> >
>
> Well the latency numbers are good - is this with dbench 90?
>
Yes, that number was dbench 90, but dbench 50, 90, and 120 gave about the
same stats with the final patch.
> snip
> >+
> >+static inline int queue_full(request_queue_t *q, int rw)
> >+{
> >+	rmb();
> >+	if (rw == READ)
> >+		return q->read_full;
> >+	else
> >+		return q->write_full;
> >+}
> >+
> >
>
> I don't think you need the barriers here, do you?
>
I put the barriers in early on, when almost all the calls were done
outside spinlocks. The current flavor of the patch only does one
clear_queue_full without the io_request_lock held, so it should be
enough to toss a barrier in just that one spot. But I wanted to leave
them in so I could move things around until the final version (if there
ever is one ;-)
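
For what it's worth, here's a rough sketch of what I'm picturing once
the barriers come out of the helpers. The clear_queue_full() helper and
the unlocked caller below are just illustrations, not the actual patch;
the idea is that queue_full() stays barrier-free because its callers
hold io_request_lock, and the single clear done outside the lock pairs
with its own wmb():

static inline int queue_full(request_queue_t *q, int rw)
{
	/* callers hold io_request_lock, so no rmb() needed here */
	if (rw == READ)
		return q->read_full;
	return q->write_full;
}

/* hypothetical helper, only sketching the idea */
static inline void clear_queue_full(request_queue_t *q, int rw)
{
	if (rw == READ)
		q->read_full = 0;
	else
		q->write_full = 0;
}

/* illustrative: the one caller that runs without io_request_lock held */
static void unlocked_clear_example(request_queue_t *q, int rw)
{
	clear_queue_full(q, rw);
	wmb();	/* make the cleared flag visible before anyone rechecks it */
}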
-chris