I need to finish one thing in the next two days, so it won't be before
Thursday probably, sorry.
>
> All (most) of my preempt tests were run with your -aa VM, and I saw the
> speedup with your VM _AND_ preempt, especially for latency (interactivity,
> system start time and latencytest0.42-png). O(1) gave additional "smoothness".
>
> What should I run for you?
>
> Below are the dbench 32 (yes, I know...) numbers for 2.4.18-pre3-VM-22 and
> 2.4.18-pre3-VM-22-preempt+lock-break.
> Sorry, both with O(1)...
>
> 2.4.18-pre3
> sched-O1-2.4.17-H7.patch
> 10_vm-22
> 00_nanosleep-5
> bootmem-2.4.17-pre6
> read-latency.patch
> waitq-2.4.17-mainline-1
> plus
> all 2.4.18-pre3.pending ReiserFS stuff
>
> dbench/dbench> time ./dbench 32
> 32 clients started
> [dbench progress dots elided]
> Throughput 41.5565 MB/sec (NBQ.9456 MB/sec 415.565 MBit/sec)
> 14.860u 48.320s 1:41.66 62.1% 0+0k 0+0io 938pf+0w
>
>
> preempt-kernel-rml-2.4.18-pre3-ingo-2.patch
> lock-break-rml-2.4.18-pre1-1.patch
> 2.4.18-pre3
> sched-O1-2.4.17-H7.patch
> 10_vm-22
> 00_nanosleep-5
> bootmem-2.4.17-pre6
> read-latency.patch
> waitq-2.4.17-mainline-1
> plus
> all 2.4.18-pre3.pending ReiserFS stuff
>
> dbench/dbench> time ./dbench 32
> 32 clients started
> [dbench progress dots elided]
> Throughput 47.0049 MB/sec (NBX.7561 MB/sec 470.049 MBit/sec)
> 14.280u 49.370s 1:30.88 70.0% 0+0k 0+0io 939pf+0w
It would also be nice to see what changes by replacing lock-break and
preempt-kernel with 00_lowlatency-fixes-4. Also you should have a look
at lock-break and port the same breaking points on top of
lowlatency-fixes-4, but make sure lock-break doesn't introduce the
usual live locks that I keep seeing time and again; I've had to reject
some of that lowlat stuff because of exactly that. At the very least a
variable on the stack should be used so we keep going on the
second/third/whatever pass; as it stands, lock-break looks like one big
live-lock to me.
A pass with only preempt would be interesting as well. You should also
run more than one pass for each kernel (I always suggest 3) to be sure
there are no spurious results.
thanks,
Andrea
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/