> That's what per-user process limits are for. Doesn't matter if it's a
> shell script or something else; any system without limits set is
> vulnerable.
I agree, but it would also be nice to have a way to clean up after the
fact without giving up the box. My limit is set at 2047 processes
which, while a lot, doesn't seem like it should be enough to guarantee
a dead box. (Don't many busy systems have more than that running at
any given time?)
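
For what it's worth, the usual way to pin that down per-user is pam_limits.
A minimal sketch, assuming /etc/security/limits.conf is honored on the box
(the user name and the 256 value are just examples, not anything from this
thread):

  # /etc/security/limits.conf
  # cap an untrusted account well below the point where fork() starvation
  # takes the whole box with it
  someuser   hard   nproc   256
  # or cap every non-root account (root is not matched by "*"):
  *          hard   nproc   256
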
> It's a base Red Hat kernel. After the "cannot allocate memory" errors, my
> system returned to normal operation and it didn't die.
> Is this the type of behavior you were looking for, or am I off base?
>
> Linux sloth 2.4.20-8 #1 Thu Mar 13 17:54:28 EST 2003 i686 i686 i386
> GNU/Linux
>
> $ :(){ :|:&};:
> [1] 3071
>
> $
> [1]+ Done : | :
>
> $ -bash: fork: Cannot allocate memory
> -bash: fork: Cannot allocate memory
> -bash: fork: Cannot allocate memory
> -bash: fork: Cannot allocate memory
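
(For anyone squinting at that one-liner: it is just a recursive shell
function. Rewritten with a readable name -- "bomb" is my substitution, not
part of the original -- it is equivalent to:)

  bomb() {
      bomb | bomb &    # each call pipes into a second copy and backgrounds both
  }
  bomb                 # start the cascade
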
Nope, on my system running stock 2.4.21, after hitting Enter and waiting
about two seconds the system is frozen. Telnet connects but never gets a
shell. None of the SysRq process-killing combos have any effect. After
a few failed killalls (which eventually killed the one shell I was able
to get), and with Alt-SysRq-S never completing the sync, I gave up and
hit Alt-SysRq-B.
What does ulimit -u say on your system? 2047 on mine.
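
If you want to see where yours falls over without risking a reboot,
lowering the limit in the shell first should at least keep the rest of the
box breathing (the 256 here is an arbitrary test value):

  $ ulimit -u           # current per-user process limit
  2047
  $ ulimit -u 256       # lower it for this shell and its children
  $ ulimit -u
  256
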
--
Ryan Underwood, <nemesis at icequake.net>, icq=10317253