On UML and mainframe Linux, *any* periodic clock tick
is heavy overhead when you have a large number of
(mostly idle) instances of Linux running, isn't it?
I think I once heard those architectures went to great lengths
to avoid periodic clock ticks. (My memory is rusty, though.)
How about this: let's apply the high-resolution timer patch,
which adds explicit timer events in between the normal 100 Hz
ticks when needed to satisfy precise sleep requests. Then
let's increase the interval between the normal periodic clock
events from 10 ms to infinity. Everything will keep working,
since the high-resolution timer code will schedule timer
events as needed -- but suddenly we'll have power consumption
as low as possible, snappier performance, and the thousands-of-instances
case will no longer suffer this huge performance drain from
periodic timer events that do nothing but update jiffies.
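
To make the one-shot idea concrete, here's a rough userspace sketch
(it's not kernel code, and the 3.7-second deadline is just a made-up
example): instead of waking every 10 ms to do nothing, you work out
the next deadline that actually matters and sleep until exactly then.
In the kernel the equivalent deadline would come from the timer wheel.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec now, deadline;

	clock_gettime(CLOCK_MONOTONIC, &now);

	/* Pretend the next pending timer is 3.7 s away -- with a
	 * 100 Hz tick that would be ~370 wakeups that do nothing but
	 * bump a counter; with a one-shot event it's a single wakeup. */
	deadline = now;
	deadline.tv_sec += 3;
	deadline.tv_nsec += 700 * 1000 * 1000;
	if (deadline.tv_nsec >= 1000 * 1000 * 1000) {
		deadline.tv_sec += 1;
		deadline.tv_nsec -= 1000 * 1000 * 1000;
	}

	/* One absolute-time sleep takes the place of hundreds of ticks. */
	clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);

	printf("woke exactly when the timer was due, no periodic ticks\n");
	return 0;
}

The kernel-side version would obviously be more involved (jiffies has
to be resynchronized from a continuous clock source on wakeup, for
one thing), but the principle is the same.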
OK, so I'm just an ignorant member of the peanut gallery, but
I'd like to hear a real kernel hacker explain why this isn't
the way to go.
- Dan