I'm working on an enhanced version of Apache and I'm hitting my head
against something I don't understand.
I've found a (to me) inexplicable system behaviour: when the number of
forked Apache instances goes somewhere beyond 1050, the machine
suddenly slows down almost to a halt and becomes totally unresponsive,
until I stop the test (SpecWeb).
Profiling the kernel shows that the scheduler and the interrupt handler
are taking most of the CPU time.
I understand that there must be a limit to the number of processes that
the scheduler can efficiently handle, but I would expect some sort of
gradual performance degradation as the number of tasks increases.
Instead I observe that increasing Apache's MaxClients limit by as
little as 10 can cause a sudden transition from smooth operation with
lots (30-40%) of idle CPU to a total lock-up.
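For context, my (simplified, from memory) understanding is that
schedule() in the 2.2/2.3 kernels does a linear scan over all runnable
tasks, roughly like the sketch below. This is not the actual kernel
source, just an illustration; a linear cost like this would explain
gradual degradation, but not the cliff I'm seeing:

    /*
     * Simplified sketch (not real kernel code) of a linear
     * run-queue scan: every call walks all runnable tasks to
     * pick the best one, so per-schedule cost grows with the
     * number of runnable processes.
     */
    struct task {
        struct task *next;   /* next runnable task */
        int counter;         /* remaining time slice */
        int priority;
    };

    struct task *pick_next(struct task *runqueue)
    {
        struct task *t, *best = runqueue;
        int weight, best_weight = -1;

        for (t = runqueue; t; t = t->next) {   /* O(n) in runnable tasks */
            weight = t->counter + t->priority; /* crude "goodness" value */
            if (weight > best_weight) {
                best_weight = weight;
                best = t;
            }
        }
        return best;
    }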
Moreover, the maximum number of processes is not even constant. If I
increase the server load gradually I can get 1500 processes running
with no problem, but if the transition is sharp (the SpecWeb case) then
I end up with a lock-up.
Anybody seen this before? Any clues?
- Fabio