> the underlying problem is very hard - since get_pid() has to take not
> only PIDs into account, but TGIDs, session IDs and process groups as
> well.
>
> I.e., even in the most hopeless situation, if there are 999,999 PIDs
> allocated already, it takes less than 10 usecs to find and allocate the
> one remaining PID. The common fastpath is a couple of instructions only.
> The overhead of skipping over contiguous regions of allocated PIDs
> scales gracefully with the number of bits to be skipped, from 0 to 10
> usecs.
>
> The memory footprint of the new PID allocator scales dynamically with
> /proc/sys/kernel/pid_max: the default 32K PIDs cause a 4K allocation;
> a pid_max of 1 million causes a 128K footprint. The current absolute
> limit for pid_max is 4 million PIDs - this does not by itself cause any
> allocation in the kernel, since the bitmaps are demand-allocated at
> runtime. The pidmap table takes up 512 bytes.
I still don't understand the current obsession with this stuff.
It is easy to have a pid_max of 2^30 and a fast algorithm that does
not take up any more kernel space (a sketch of one such algorithm
is given below).
It seems to me you are first creating an unrealistic and unfavorable
situation (putting pid_max at some artificially low value, starting a
lot of tasks, and saying: look, the algorithm is quadratic!) and then
solving the problem that you thus invented yourself.
Please leave pid_max large.
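The message doesn't spell out the algorithm, but one plausible reading
(an assumption here, and roughly what older kernels did) is a plain
incrementing counter: with 2^30 possible PIDs and at most tens of
thousands of live tasks, the counter almost never hits a PID that is
still in use, and the collision check can walk the task list the kernel
already keeps, so no extra storage is needed:

#include <stdbool.h>
#include <stdio.h>

#define PID_MAX (1 << 30)                   /* the large pid_max argued for above */

/* Stand-in for a walk of the kernel's existing task list; in the
 * kernel this check would need no storage beyond what is already
 * there.  Here we pretend two tasks are alive. */
static bool pid_in_use(int pid)
{
    return pid == 301 || pid == 302;
}

static int last_pid = 300;

static int alloc_pid(void)
{
    do {
        if (++last_pid >= PID_MAX)
            last_pid = 300;                 /* wrap, skipping low reserved PIDs */
    } while (pid_in_use(last_pid));         /* rarely loops: PID space >> task count */
    return last_pid;
}

int main(void)
{
    for (int i = 0; i < 3; i++)
        printf("allocated pid %d\n", alloc_pid()); /* 303, 304, 305 */
    return 0;
}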
Andries