> I still don't understand the current obsession with this stuff. It is
> easy to have pid_max 2^30 and a fast algorithm that does not take any
> more kernel space.
it's only an if(unlikely()) branch in a 1:4096 slowpath to handle this, so
why not? If it couldn't be done sanely then I wouldn't argue about this, but
look at the code: it can be done cleanly and at very low cost.
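
Roughly what I mean, as a sketch (the identifiers below are stand-ins,
not the real allocator): the fastpath is a single test_and_set_bit(),
and the extra pid_max handling is one unlikely() test inside a slowpath
that runs about once per 4096 allocations:

#include <linux/bitops.h>	/* test_and_set_bit, find_next_zero_bit */
#include <linux/compiler.h>	/* likely/unlikely */

/* hypothetical stand-ins, not the real allocator's names: */
extern unsigned long pid_bitmap[];	/* one bit per PID */
extern int last_pid, pid_max;
#define RESERVED_PIDS	300		/* never recycle low PIDs */

static int alloc_pid_sketch(void)
{
	int pid = last_pid + 1;

	if (pid >= pid_max)
		pid = RESERVED_PIDS;

	for (;;) {	/* exhaustion handling omitted for brevity */
		/* fastpath: the next bit is almost always free */
		if (likely(!test_and_set_bit(pid, pid_bitmap))) {
			last_pid = pid;
			return pid;
		}
		/* slowpath, taken ~once per 4096 allocations */
		pid = find_next_zero_bit(pid_bitmap, pid_max, pid);
		/* the branch under discussion: one cheap wrap test */
		if (unlikely(pid >= pid_max))
			pid = RESERVED_PIDS;
	}
}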
> It seems to me you are first creating an unrealistic and unfavorable
> situation (put pid_max at some artificially low value, [...]
we want the default to be low, so that compatibility with the older SysV
APIs is preserved. Also, why use a 128K bitmap to handle 1 million PIDs on
a system that has at most 1000 tasks running? I'd rather use an algorithm
that scales well from a low pid_max up to a large one.
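
One shape such an algorithm could take (again a sketch under made-up
assumptions, not the posted patch; PID_MAX_LIMIT is a hypothetical
ceiling): carve the PID space into page-sized bitmap chunks and
allocate each chunk on first use, so a 1000-task box touches a single
page even with pid_max at 2^20, while the map grows on demand if
pid_max is raised:

/* assumed constants, for illustration only: */
#define PAGE_SIZE	4096
#define PID_MAX_LIMIT	(1 << 22)	/* hypothetical upper ceiling */
#define BITS_PER_PAGE	(PAGE_SIZE * 8)	/* 32768 PIDs per chunk */
#define PIDMAP_ENTRIES	\
	((PID_MAX_LIMIT + BITS_PER_PAGE - 1) / BITS_PER_PAGE)

struct pidmap {
	int nr_free;		/* free PIDs left in this chunk */
	unsigned long *page;	/* bitmap page, NULL until first use */
};

static struct pidmap pidmap_array[PIDMAP_ENTRIES];

/* The chunk covering PID p is pidmap_array[p / BITS_PER_PAGE]; its
 * page gets allocated by the first allocation that lands in it, so
 * the resident footprint tracks the highest PID actually handed out
 * rather than pid_max itself. */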
> Please leave pid_max large.
why? For most desktop systems even 32K PIDs is probably more than enough.
A large pid_max only increases the RAM footprint. (Well, not under the
current allocation scheme, but still.)
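
For a flat one-bit-per-PID bitmap the arithmetic is simple: pid_max bits
is pid_max/8 bytes, so 2^15 PIDs cost 4K, 2^20 PIDs cost the 128K
mentioned above, and 2^30 PIDs would cost a full 128 MB of bitmap.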
Ingo