That should be the effect of the first part of my VM updates that went into 2.4.19pre.
> > > It appears from these results that there is no appreciable improvement
> > > using the fsync patch - there is a slight loss of top end on 4 adapters,
> > > 1 drive.
> >
> > That's very much expected. As I said, with my new design I could remove
> > the slight loss by adding an additional (third) pass; the loss is what I
> > expected from the simple patch that puts wait_on_buffer right in the
> > first pass.
> >
> > I mentioned this in my first email of the thread, so it all looks right.
> > For an rc2, accepting the slight loss sounds like the simplest approach.
> >
> > If you care about it, we can fix it with my new fsync accounting design;
> > just let me know if you're interested in it. Personally I'm pretty much
> > fine with it this way too. As I said in the first email, if we block,
> > it's likely that bdflush is pumping the queue for us. The slowdown is
> > most probably due to the queue being unplugged too early by the blocking
> > points.
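
To make the pass structure above concrete, here is a rough sketch against
the 2.4 buffer APIs. The function names and the surrounding bookkeeping are
made up for illustration; this is not the actual fs/buffer.c code, and the
real design also needs the extra accounting pass to deal with buffers that
are already in flight.

#include <linux/fs.h>
#include <linux/locks.h>

/*
 * What the simple rc2-style patch effectively does: submit and wait
 * per buffer. Every wait_on_buffer() is a blocking point that unplugs
 * the request queue early, losing the batching.
 */
static void fsync_single_pass(struct buffer_head **bhs, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		ll_rw_block(WRITE, 1, &bhs[i]);
		wait_on_buffer(bhs[i]);	/* blocking point: early unplug */
	}
}

/*
 * The multi-pass idea: submit everything first, so the queue can stay
 * plugged and merge requests, then wait in a separate pass.
 */
static void fsync_multi_pass(struct buffer_head **bhs, int nr)
{
	int i;

	/* first pass: queue all the writes without blocking */
	ll_rw_block(WRITE, nr, bhs);

	/* later pass: wait for completion only after everything is queued */
	for (i = 0; i < nr; i++)
		wait_on_buffer(bhs[i]);
}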
>
> I don't care about the very slight (and possibly within the noise floor of
> our test) reduction in throughput due to the fsync fix. I think yours and
> Andrew's assertion that the bdflush / dirty page handling is getting
> stopped up points at the problem preventing us from scaling to my personal
> goal of 250 to 300MB/sec on our setup.
Yep, it should be. Actually, running multiple fsyncs from multiple tasks
will partly work around the single-threaded async flushing of 2.4, as the
sketch below shows.
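
A trivial userspace sketch of that workaround (the paths, sizes, and number
of tasks are made up for illustration): each task dirties and fsyncs its own
file, so writeout is driven from several contexts instead of from the single
bdflush thread.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NR_TASKS	4
#define CHUNK		(1 << 20)	/* 1 MiB per write */
#define NR_CHUNKS	64

static void *writer(void *arg)
{
	char path[64], *buf;
	int fd, i;

	snprintf(path, sizeof(path), "scratch.%ld", (long)arg);
	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return NULL;
	}
	buf = malloc(CHUNK);
	memset(buf, 0xaa, CHUNK);

	for (i = 0; i < NR_CHUNKS; i++)
		write(fd, buf, CHUNK);
	fsync(fd);	/* each task blocks here, flushing in parallel */

	free(buf);
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_TASKS];
	long i;

	for (i = 0; i < NR_TASKS; i++)
		pthread_create(&tid[i], NULL, writer, (void *)i);
	for (i = 0; i < NR_TASKS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}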
>
> Thanks,
>
> Mark Gross
> PS: I had a very nice time on Mount Hood. I didn't make it to the top this
> time; too much snow had melted off the top for a safe attempt at the
> summit. It was a guided (http://www.timberlinemtguides.com) 3-day climb.
:)
Andrea