You're right. And I thought this code was simple :-)
I guess what suggests itself to me is something like:
static inline void spin_unlock(spinlock_t *lock)
{
#if SPINLOCK_DEBUG
	if (lock->magic != SPINLOCK_MAGIC)
		BUG();
	if (!spin_is_locked(lock))
		BUG();
+	/* clear the recorded owner while we still hold the lock */
+	lock->last_lock_processor = -1;
#endif
	__asm__ __volatile__(
		spin_unlock_string
		:"=m" (lock->lock) : : "memory");
}
No longer any race (right?), and we don't lose anything, since the
processor clearing the field is about to drop the lock it (presumably)
held. I also wonder whether spin_unlock should check that the same
processor that grabbed the lock is the one releasing it. Unlocking from
a different processor isn't exactly a bug, and somebody might write
code like that deliberately, but it seems suspicious.
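Something like this, maybe (just a sketch; it assumes spinlock_t grows
a last_lock_processor field, and uses smp_processor_id() for the
current CPU):

static inline void spin_lock(spinlock_t *lock)
{
#if SPINLOCK_DEBUG
	if (lock->magic != SPINLOCK_MAGIC)
		BUG();
#endif
	__asm__ __volatile__(
		spin_lock_string
		:"=m" (lock->lock) : : "memory");
#if SPINLOCK_DEBUG
	/* record the owner only after we actually hold the lock */
	lock->last_lock_processor = smp_processor_id();
#endif
}

static inline void spin_unlock(spinlock_t *lock)
{
#if SPINLOCK_DEBUG
	if (lock->magic != SPINLOCK_MAGIC)
		BUG();
	if (!spin_is_locked(lock))
		BUG();
	/* hypothetical ownership check: complain if a different
	   processor releases the lock than the one that took it */
	if (lock->last_lock_processor != smp_processor_id())
		BUG();
	lock->last_lock_processor = -1;
#endif
	__asm__ __volatile__(
		spin_unlock_string
		:"=m" (lock->lock) : : "memory");
}

Since last_lock_processor is only ever touched while the lock is held,
the check itself shouldn't open any new race. Comments?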
--
					-bwb

Brent Baccala
baccala@freesoft.org