> nevertheless we do lock up for 32 seconds if there are 32K PIDs allocated
> in a row and last_pid hits that range - regardless of pid_max. (Depending
> on the cache architecture it could take significantly more.)
What about randomizing the PID selection a bit? I.e., allocate PIDs
consecutively as long as they are free; when you hit an already used
PID, roll the dice to find a new position from which the search should
continue. As long as the allocated fraction of the PID space is
reasonably small, this algorithm should be very quick in the average
case.
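A minimal userspace sketch of that idea (not real kernel code; the
pid_used bitmap, PID_MAX and the alloc_pid() name are just made up for
illustration):

#include <stdbool.h>
#include <stdlib.h>

#define PID_MAX 32768

static bool pid_used[PID_MAX];	/* stand-in for the kernel's PID map */
static int last_pid;

/* Allocate the next free PID, restarting at a random point on collision. */
static int alloc_pid(void)
{
	int probes = 0;
	int pid = (last_pid + 1) % PID_MAX;

	while (probes++ < 2 * PID_MAX) {
		if (!pid_used[pid]) {
			pid_used[pid] = true;
			last_pid = pid;
			return pid;
		}
		/* Collision: roll the dice and continue the search there. */
		pid = rand() % PID_MAX;
	}
	return -1;	/* give up; a real allocator would fall back to a full scan */
}

Note that once the table gets close to full the random restarts alone
may keep missing the few remaining free PIDs, so a real implementation
would want a bounded number of rolls followed by a plain linear scan.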
Another possible solution: divide the PID space into blocks and, for
each block, keep a count of the PIDs still available in it; when
allocating, simply skip the blocks which are full. With a block size of
about sqrt(PID space size), this runs in O(sqrt(PID space size)) in the
worst case (see the sketch below).
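Again only a userspace sketch under made-up names (pid_used,
free_in_block, BLOCK_SIZE): an allocation skips at most NR_BLOCKS full
blocks and then scans a single block, both of which are about
sqrt(PID_MAX) long.

#include <stdbool.h>

#define PID_MAX    32768
#define BLOCK_SIZE 256			/* roughly sqrt(PID_MAX) */
#define NR_BLOCKS  (PID_MAX / BLOCK_SIZE)

static bool pid_used[PID_MAX];
static int free_in_block[NR_BLOCKS];

static void init_pids(void)
{
	int b;

	for (b = 0; b < NR_BLOCKS; b++)
		free_in_block[b] = BLOCK_SIZE;
}

static int alloc_pid(void)
{
	int b, i;

	/* Skip full blocks: at most NR_BLOCKS ~ sqrt(PID_MAX) iterations. */
	for (b = 0; b < NR_BLOCKS; b++) {
		if (free_in_block[b] == 0)
			continue;
		/* Scan inside the block: at most BLOCK_SIZE ~ sqrt(PID_MAX). */
		for (i = 0; i < BLOCK_SIZE; i++) {
			int pid = b * BLOCK_SIZE + i;

			if (!pid_used[pid]) {
				pid_used[pid] = true;
				free_in_block[b]--;
				return pid;
			}
		}
	}
	return -1;			/* no free PID left */
}

static void free_pid(int pid)
{
	pid_used[pid] = false;
	free_in_block[pid / BLOCK_SIZE]++;
}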
Have a nice fortnight
-- 
Martin `MJ' Mares  <mj@ucw.cz>  http://atrey.karlin.mff.cuni.cz/~mj/
Faculty of Math and Physics, Charles University, Prague, Czech Rep., Earth
This message is transmitted on 100% recycled electrons.