No. It is not the data structure; I did not change it for the
experimental code. This, of course, could lead to some additional
overhead in looking up the next timer to figure out when the next
interrupt is needed. I used the code from the IBM tickless patch,
which did a brute-force search. In fact this is not too bad, as I
also force a timer every second, so at most ten lists need to be
examined. The instrumentation on this lookup hardly shows up, and of
course, it is only needed after an interrupt.
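A minimal sketch of that bounded brute-force search, under my own
assumptions (the names, the bucket layout, and NUM_LISTS are stand-ins,
not the real patch's code): timers due at tick (base + i) hang off
buckets[i], and the forced once-per-second timer keeps the scan short.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical layout: at most this many buckets ever need to be
 * scanned, because a timer is forced every second. */
#define NUM_LISTS 10

struct timer_list_entry {
    struct timer_list_entry *next;
};

/* buckets[i] holds the timers expiring at tick (base + i). */
static struct timer_list_entry *buckets[NUM_LISTS];

/* Brute-force search: walk the buckets in order; the first non-empty
 * one gives the next expiry.  The forced per-second timer means some
 * bucket should be non-empty, but return base + NUM_LISTS as a
 * fallback horizon anyway. */
static unsigned long next_timer_expiry(unsigned long base)
{
    for (int i = 0; i < NUM_LISTS; i++)
        if (buckets[i] != NULL)
            return base + i;
    return base + NUM_LISTS;
}
```

The point is that the cost is bounded by the bucket count, not by the
number of pending timers, which is why it barely shows up in the
instrumentation.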
The overhead comes from setting up a timer for the "slice" on every
schedule. schedule(), on a busy system, gets called often, and "slices"
do not often expire. The other issue is the additional overhead of
jiffies (which, in this code, is a function). On review of this code,
I see that it needs to protect itself from interrupts; as written in
the experimental code it is wrong and could lose track of time
(however, it is faster as written). Neither of these operations takes
very long; it is just that they are called on every schedule, which
happens often enough to make a difference. Those who optimized the
scheduler with gotos and such knew what they were doing! It is a
function that is called often enough to make the optimization
worthwhile.
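To make the interrupt-protection bug concrete, here is a userspace
sketch of an on-demand jiffies function of the kind described above.
Every name in it (read_counter, CYCLES_PER_JIFFY, the irq macros, the
state variables) is a stand-in of mine, not the experimental patch's
actual code; the irq macros are no-ops so the sketch runs in userspace.

```c
#include <assert.h>

#define CYCLES_PER_JIFFY 1000UL

/* Stubs for this sketch: a fake free-running clock source and no-op
 * interrupt masking (in the kernel these would be the real thing). */
static unsigned long fake_counter;
static unsigned long read_counter(void) { return fake_counter; }
#define local_irq_save(flags)    ((void)(flags))
#define local_irq_restore(flags) ((void)(flags))

static unsigned long jiffies_base; /* jiffies at the last update       */
static unsigned long last_count;   /* counter value at the last update */

/* The read-modify-write of (jiffies_base, last_count) must not be
 * interleaved with the timer interrupt, which updates the same pair;
 * otherwise elapsed time can be counted twice or dropped.  Skipping
 * the irq save/restore is what makes the faster version wrong. */
unsigned long get_jiffies(void)
{
    unsigned long flags, elapsed;

    local_irq_save(flags);
    elapsed = (read_counter() - last_count) / CYCLES_PER_JIFFY;
    jiffies_base += elapsed;
    last_count   += elapsed * CYCLES_PER_JIFFY;
    local_irq_restore(flags);

    return jiffies_base;
}
```

Without the masking, an interrupt between the counter read and the
update of last_count can advance the pair underneath us, and the time
it accounted for gets added a second time (or lost).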
For those who would like to try the experimental system it can be found
at:
http://sourceforge.net/projects/high-res-timers
where it is called the "no HZ tick test". Do read the release notes to
understand what you are getting.
George