Issue with max_threads (and other resources) and highmem

Dave McCracken (dmccr@us.ibm.com)
Tue, 23 Oct 2001 15:19:45 -0500


It was recently pointed out to me that the default value for max_threads
(i.e. the maximum number of tasks on the system) doesn't work right on
machines with lots of memory.

A quick examination of fork_init() shows that max_threads is supposed to be
limited so that the task_structs and kernel stacks take no more than half
of physical memory. This calculation ignores the fact that task_structs
must be allocated from the normal pool, not the highmem pool, which is a
clear bug. On a machine with enough physical memory, all of normal memory
can end up allocated to task_structs, which tends to make the machine die.
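
For reference, the calculation in question looks something like this
(paraphrased from 2.4 kernel/fork.c, so the exact constants may differ in
your tree):

	void __init fork_init(unsigned long mempages)
	{
		/*
		 * The default maximum number of threads is set to a
		 * safe value: the thread structures can take up at
		 * most half of memory.  Note that mempages is *total*
		 * physical memory, including highmem, even though the
		 * task_struct and kernel stack can only come from the
		 * normal zone.
		 */
		max_threads = mempages / (THREAD_SIZE/PAGE_SIZE) / 2;

		init_task.rlim[RLIMIT_NPROC].rlim_cur = max_threads/2;
		init_task.rlim[RLIMIT_NPROC].rlim_max = max_threads/2;
	}

On x86 that's two pages per task (the task_struct lives at the bottom of
the stack pages), so at max_threads tasks these structures consume half of
mempages, all of which must come from normal memory.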

fork_init() gets its knowledge of physical memory passed in from
start_kernel(), which sets it from num_physpages. The same value is also
passed to several other init functions.
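
For context, the relevant chunk of start_kernel() looks roughly like this
(again paraphrased from a 2.4 tree; the exact set of callers varies by
version):

	mempages = num_physpages;

	fork_init(mempages);
	proc_caches_init();
	vfs_caches_init(mempages);
	buffer_init(mempages);
	page_cache_init(mempages);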

My question boils down to this... Should we change start_kernel() to
exclude high memory from the physical memory size it passes to all the init
functions, or should we make the adjustment only in fork_init()? And what
is the best way to calculate this number? I don't see any simple way in
architecture-independent code to get the size of high memory vs. normal
memory.
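
One possible fork_init()-only fix would be to subtract the highmem pages
before doing the calculation. A sketch, assuming totalhigh_pages is usable
here and is zero on !CONFIG_HIGHMEM configurations:

	void __init fork_init(unsigned long mempages)
	{
		/*
		 * Count only pages the task_structs and stacks can
		 * actually be allocated from, i.e. leave out the
		 * highmem pool.
		 */
		mempages -= totalhigh_pages;

		max_threads = mempages / (THREAD_SIZE/PAGE_SIZE) / 2;

		init_task.rlim[RLIMIT_NPROC].rlim_cur = max_threads/2;
		init_task.rlim[RLIMIT_NPROC].rlim_max = max_threads/2;
	}

The downside is that the other init functions would still see the inflated
number. Doing the subtraction in start_kernel() would fix them all at once,
but some of those caches (the page cache, for instance) can actually use
highmem pages, so they arguably want the full figure.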

What's the best approach here?

Thanks,
Dave McCracken

======================================================================
Dave McCracken IBM Linux Base Kernel Team 1-512-838-3059
dmccr@us.ibm.com T/L 678-3059
