That's the case in most of my "benchmarks".
e.g. dbench 32:
2.4.18-pre3
sched-O1-2.4.17-H7.patch
10_vm-22
00_nanosleep-5
bootmem-2.4.17-pre6
read-latency.patch
waitq-2.4.17-mainline-1
plus
all 2.4.18-pre3.pending ReiserFS stuff
Throughput 41.5565 MB/sec (NB=51.9456 MB/sec 415.565 MBit/sec)
14.860u 48.320s 1:41.66 62.1% 0+0k 0+0io 938pf+0w
with preempt+lock-break
Throughput 47.0049 MB/sec (NB=58.7561 MB/sec 470.049 MBit/sec)
14.280u 49.370s 1:30.88 70.0% 0+0k 0+0io 939pf+0w
user: nearly equal
system: ~1 second more with preempt+lock-break
wall clock: ~11 seconds less with preempt+lock-break
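
For reference, a quick back-of-the-envelope check of the numbers above (the
1:41.66 and 1:30.88 fields are elapsed minutes:seconds), just to make the
deltas explicit; everything here is copied straight from the output quoted
above:

    # rough comparison of the two dbench 32 runs
    plain   = 1 * 60 + 41.66   # 2.4.18-pre3 + patches, elapsed seconds
    preempt = 1 * 60 + 30.88   # same tree with preempt + lock-break

    saved = plain - preempt
    print("wall clock saved: %.2f s (%.1f%% faster)" % (saved, saved / plain * 100))
    print("throughput ratio: %.3f" % (47.0049 / 41.5565))

That gives ~10.8 s (~10.6%) less wall clock, which lines up with the ~13%
throughput gain dbench reports.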
> Then for the preemptable kernel to do the job
> faster something else must go up, idle time perhaps. If this is the
> case, then there is some place in the kernel that is wasting cpu time
> and that is preemptable and the preemptable patch is moving this idle
> time to the idle process.
> Whatever the reason, while I do want to promote preemption, I think we
> should look at this issue and, at the very least, explain it.
What do you think about IO-wait accounting for top/ps, like SUN has?
Do you think we lost CPU cycles there?
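
To check both the idle-time theory quoted above and the IO-wait idea, one
could run something like the sketch below alongside dbench on both kernels.
It is only an illustration and assumes the 2.4-style /proc/stat layout
(cpu user nice system idle); an iowait field only shows up as an extra
column on kernels that actually export one.

    # hypothetical sampler: snapshot the aggregate cpu counters before and
    # after a dbench run and report where the time went
    import time

    def cpu_counters():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    before = cpu_counters()
    time.sleep(100)                  # run "dbench 32" in parallel meanwhile
    after = cpu_counters()

    delta = [a - b for a, b in zip(after, before)]
    total = float(sum(delta))
    names = ["user", "nice", "system", "idle", "iowait"]
    for name, ticks in zip(names, delta):
        print("%-7s %5.1f%%" % (name, 100.0 * ticks / total))

If the preempt+lock-break kernel really shows a smaller idle share for the
same workload, that would support the quoted explanation; and with an
exported iowait counter, top/ps could show the distinction directly.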
--
Dieter Nützel
Graduate Student, Computer Science
University of Hamburg, Department of Computer Science
@home: Dieter.Nuetzel@hamburg.de