On Wed, Jul 09, 2003 at 11:07:55AM -0400, David Ford wrote:
> No such thing exists. I can have 10,000 processes doing nothing and
> have a load average of 0.00. I can have 100 processes each sucking cpu
> as fast as the electrons flow and have a dead box.
Well, like I said, in this specific case we are talking about a fork
bomb, not a bunch of idle processes. My question is what upper limit to
set in order to ensure that processes that do nothing but "while (1)
fork();" cannot take down the system. Apparently 2047 is too high for
2.4.21, at least on my system, yet a slower box handles a 2047 ulimit
fine with a 2.4.20 kernel.
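For concreteness, here is roughly what I expect the limit to buy me. This
is just a sketch I threw together to illustrate the point (the 512 value
is arbitrary, and it is not code from either kernel): once RLIMIT_NPROC
is hit, fork() should start failing with EAGAIN, so the bomb should stall
at the limit instead of taking the box down.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/resource.h>

int main(void)
{
	/* Arbitrary demo limit; run this as a normal user, since root
	 * may be allowed to exceed RLIMIT_NPROC. */
	struct rlimit rl = { 512, 512 };

	if (setrlimit(RLIMIT_NPROC, &rl) < 0) {
		perror("setrlimit");
		return 1;
	}

	for (;;) {
		pid_t pid = fork();
		if (pid < 0) {
			/* Per-user limit reached: fork() refuses, typically
			 * with EAGAIN. */
			fprintf(stderr, "fork: %s\n", strerror(errno));
			break;
		}
		if (pid == 0)
			pause();	/* children just sit there and count
					 * against the per-user limit */
	}

	/* Clean up: signal the whole process group, including ourselves. */
	kill(0, SIGTERM);
	return 0;
}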
> Learn how to manage resource limits and you can tuck another feather
> into your fledgeling sysadmin hat ;)
I already know how to manage the limits, but I am asking why the system
seems to hang indefinitely when the process maximum is set to 2047, but
not when it is set to, say, 1500. Do you have any idea? Why would there
be such a large change in behavior for such a small change in the
parameter?
Furthermore, why does my slower (600 MHz vs. 800 MHz) system running
2.4.20 kill off a fork bomb at a 2047 ulimit instantaneously, while
2.4.21 takes half an hour or more, at which point I give up?
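In case it matters, this is how I am double-checking the limit actually
in effect on each box before launching the bomb (just a trivial check
program I wrote for this, equivalent to running "ulimit -u" in the
launching shell):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	/* Print the per-user process limit in effect for this process,
	 * i.e. what "ulimit -u" reports in the shell that starts it. */
	if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}
	printf("RLIMIT_NPROC: soft=%lu hard=%lu\n",
	       (unsigned long)rl.rlim_cur,
	       (unsigned long)rl.rlim_max);
	return 0;
}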
-- 
Ryan Underwood, <nemesis at icequake.net>, icq=10317253