Your comment prompted me to look at
linux-2.5.2-pre11/include/asm-i386/spinlock.h, and I now believe that
the "lock; decb" that it uses for grabbing spinlocks can return an
incorrect success if enough processors are waiting on the same
spinlock. Each contender decrements the one-byte lock word and tests
the sign of the result, so once 128 CPUs are already spinning (byte
value -128), the next "decb" wraps the byte to +127 and that waiter
falsely concludes that it got the lock. I don't know if anybody has
ever built a shared-memory x86 multiprocessor with 130 or more CPUs
(one holder plus 129 waiters), but it's possible to imagine. It's
also possible to imagine this scenario happening with even just one
processor and a preemptible kernel. I believe that the current
preemptible kernel patch limits the number of preempted processes to
something like four or six, but that's just a temporary limitation of
the current version.
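To make the wraparound concrete, here is a little user-space program
(my own illustration, not kernel code) that models the one-byte lock
word, assuming the acquire path tests the sign of the decremented
byte the way the "js" branch in spinlock.h does:

#include <stdio.h>

int main(void)
{
	unsigned char lock = 1;		/* 1 == unlocked, as in spinlock.h */
	int waiter;

	lock--;				/* CPU 0 grabs the lock: 1 -> 0 */

	/* Each contending CPU performs one "lock; decb" and then
	 * spins waiting for release, so while the lock is held the
	 * byte sits at -N for N waiters. */
	for (waiter = 1; waiter <= 256; waiter++) {
		lock--;
		if (!(lock & 0x80)) {	/* sign clear: "js" falls through */
			printf("waiter %d falsely acquires the lock "
			       "(byte wrapped to 0x%02x)\n", waiter, lock);
			return 0;
		}
	}
	return 0;
}

On my reading this reports a false acquisition at waiter 129, i.e.
the wrap bites well before the byte sees anywhere near 256
decrements.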
If we get to the point where this scenario really could happen,
then maybe the spinlock counter type should be expanded from one byte
to four. I think we already assume four-byte alignment in
asm-i386/semaphore.h, which uses a four-byte count (I think "lock" is
not guaranteed to work across page boundaries).
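For the sake of discussion, a widened lock might look something like
this (a sketch only; names like wide_spin_lock are mine, and the real
kernel keeps the spin loop in a separate out-of-line section rather
than inline as I have it here):

typedef struct {
	volatile int lock;		/* 1 == unlocked, as before */
} wide_spinlock_t;

static inline void wide_spin_lock(wide_spinlock_t *lp)
{
	__asm__ __volatile__(
		"\n1:\t"
		"lock ; decl %0\n\t"	/* "decl" on four bytes, was "decb" */
		"jns 3f\n"		/* non-negative result: we own it */
		"2:\t"
		"cmpl $0,%0\n\t"	/* spin until the holder releases */
		"rep;nop\n\t"
		"jle 2b\n\t"
		"jmp 1b\n"		/* then try the decrement again */
		"3:\n\t"
		: "+m" (lp->lock) : : "memory");
}

static inline void wide_spin_unlock(wide_spinlock_t *lp)
{
	__asm__ __volatile__("movl $1,%0" : "+m" (lp->lock) : : "memory");
}

With a 32-bit count, the decrement cannot wrap to a non-negative
value until on the order of two billion contenders are piled up on
the same lock.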
Adam J. Richter __ ______________ 4880 Stevens Creek Blvd, Suite 104
adam@yggdrasil.com \ / San Jose, California 95129-1034
+1 408 261-6630 | g g d r a s i l United States of America
fax +1 408 261-6631 "Free Software For The Rest Of Us."