Why does everybody think this is not a *real* benchmark? Thinking back to
the good old days at university, the systems I tried to compile
applications on were *always* overloaded. Would it make a difference to
you if I ran
for a in lots_of.srpm; do
    rpm --rebuild "$a" &
done
Basically this gives the same result: lots of compile jobs running in
parallel. All *I* am doing is taking it a little to the extreme, since
running the compile with make -j2 does not make a *noticeable* difference
at all. And as I said previously, my idea was to put the system under high
memory pressure and test the different VMs (AA and RvR) ...
Furthermore, some people think this combination (sched+preempt) is only
good for latency (if at all); all I can say is that it works *very*
well for me latency-wise. Since I don't know how to measure latency
exactly, I tried to run my compile script (make -j50) while running my
usual desktop + xmms. Result: xmms was *not* skipping, although the
system was ~70MB into swap and the load was >50. Changing workspaces
worked immediately all the time. I was able to get xmms to skip for
a short while by starting galeon, StarOffice and gimp with ~10 pictures
all at the same time, but once all applications had come up xmms was not
skipping any more and the system was ~130MB into swap. This is the best
result for me so far, but I have to admit that I did not test mini-ll
+sched in this area (the earliest I can test this is Wednesday, sorry).
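If somebody wants a number instead of "xmms does not skip": one crude
idea (just a sketch, I have not verified that this is a good latency
measure) would be a small probe that sleeps 10ms in a loop and prints
how late each wakeup comes back while the compile load is running:

    # crude wakeup-latency probe (sketch only; needs GNU date for %N
    # and a sleep that accepts fractional seconds)
    while :; do
        before=$(date +%s%N)
        sleep 0.01
        after=$(date +%s%N)
        # anything much above 0 is extra delay beyond the 10ms sleep
        echo "$(( ($after - $before - 10000000) / 1000000 )) ms late"
    done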
Since it has been a little while since I posted my system specs, here they are:
- Athlon 1.2GHz (single proc)
- 256 MB
- IDE drive (quantum)
> also, I've often found the user/sys/elapsed components to all be interesting;
> how do they look? (I'd expect preempt to have more sys, for instance.)
        13-pre5aa1      18-pre2aa2      18-pre3  18-pre3s        18-pre3sp       18-pre3smini
        sys    user     sys    user              sys    user     sys    user     sys    user
j100:   30.78  297.07   32.40  294.38      *     27.74  296.02   27.55  292.95   28.30  297.67
j100:   30.92  297.11   33.04  295.15      *     29.14  296.25   26.88  292.77   28.13  296.44
j100:   29.58  297.90   34.01  294.16      *     27.56  295.76   26.25  293.79   27.96  296.47
j100:   30.62  297.13   32.00  294.30      *     28.47  296.46   27.64  293.42   27.50  297.47
j100:   30.48  299.43   32.28  295.42      *     27.77  296.44   27.53  292.10   27.23  297.24
As expected, the system and user times are almost identical across kernels.
The "fastest" compile results are always the ones where the job gets the
largest share of CPU time. So I guess it would be more interesting to see how
much CPU time e.g. kswapd gets. I probably have to enhance my script to run
vmstat in the background ... Would this provide useful data?
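Something along these lines is what I have in mind (only a sketch; using
ps -C to find kswapd and reading utime/stime out of /proc/<pid>/stat are
assumptions about how I would do it, not tested yet):

    #!/bin/sh
    # sketch: sample vmstat and kswapd CPU time while the benchmark runs
    vmstat 5 > vmstat.log &
    VMSTAT_PID=$!

    # kswapd is a kernel thread, so match it by command name
    KSWAPD_PID=$(ps -C kswapd -o pid= | tr -d ' ')

    ( while :; do
          # fields 14 and 15 of /proc/<pid>/stat are utime and stime (jiffies)
          echo "$(date +%s) $(awk '{print $14, $15}' /proc/$KSWAPD_PID/stat)"
          sleep 5
      done ) > kswapd.log &
    SAMPLER_PID=$!

    make -j100 bzImage          # placeholder for the actual benchmark run

    kill $VMSTAT_PID $SAMPLER_PID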
Regards,
Jogi
--
Well, yeah ... I suppose there's no point in getting greedy, is there?
                                                  << Calvin & Hobbes >>