The way the test code works is that it keeps mlocking more blocks of memory
until one of the mlocks fails, and then does the rest of its work with that
many blocks. It's hard to see how we could get a legitimate oom with that
strategy.
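In case it helps to picture it, here is a minimal sketch of that strategy.
This is not Ben's actual test code; the block size, block count and names
are made-up values, just to show the "lock until the first mlock fails,
then use that many blocks" idea:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define BLOCK_SIZE (4 * 1024 * 1024)   /* 4 MB per block (assumed)      */
#define MAX_BLOCKS 4096                /* arbitrary upper bound         */

int main(void)
{
	void *blocks[MAX_BLOCKS];
	int nblocks = 0;
	int i;

	/* Needs a large enough RLIMIT_MEMLOCK (or root) to lock much memory. */
	while (nblocks < MAX_BLOCKS) {
		void *p = malloc(BLOCK_SIZE);
		if (!p)
			break;
		if (mlock(p, BLOCK_SIZE) != 0) {
			/* First mlock failure: stop growing the locked set. */
			free(p);
			break;
		}
		blocks[nblocks++] = p;
	}

	printf("locked %d blocks (%ld MB)\n",
	       nblocks, (long)nblocks * BLOCK_SIZE / (1024 * 1024));

	/* ... the rest of the test works with these nblocks blocks ... */

	for (i = 0; i < nblocks; i++) {
		munlock(blocks[i], BLOCK_SIZE);
		free(blocks[i]);
	}
	return 0;
}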
> can you reproduce on
> 2.4.14pre5aa1? mainline (at least before pre6) could deadlock with too
> much mlocked memory.
OK, he tried it with pre5aa1:
ben> My test application gets killed (I believe by the oom handler). dmesg
ben> complains about a lot of 0-order allocation failures. For this test,
ben> I'm running with 2.4.14pre5aa1, 3.5gb of RAM, 2 PIII 1Ghz.
*Just in case* it's oom-related, I've asked Ben to try it with one fewer than
the maximum number of memory blocks he can allocate.
If it does turn out to be oom, it's still a bug, right?
--
Daniel