I have a potential cause here, but I'm not sure if it makes sense. The
following code (slightly reformatted) is taken from fs/locks.c in the
Mandrake 2.4.19-16mdk kernel:
static void locks_wake_up_blocks(struct file_lock *blocker,
                                 unsigned int wait)
{
        while (!list_empty(&blocker->fl_block)) {
                struct file_lock *waiter = list_entry(blocker->fl_block.next,
                                                      struct file_lock,
                                                      fl_block);
                if (wait) {
                        locks_notify_blocked(waiter);
                        /* Let the blocked process remove waiter from the
                         * block list when it gets scheduled.
                         */
                        current->policy |= SCHED_YIELD;
                        schedule();
                } else {
                        /* Remove waiter from the block list, because by the
                         * time it wakes up blocker won't exist any more.
                         */
                        locks_delete_block(waiter);
                        locks_notify_blocked(waiter);
                }
        }
}
It appears that if this function is called with a wait value of zero,
every blocked process gets woken up before schedule() is ever called:
the else branch just deletes each waiter from the block list and
notifies it, then loops straight on to the next one. That means the
scheduler, rather than the locking code, ends up picking which of the
woken processes actually gets the lock.
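To see it from userspace, something like the test below ought to show the
effect (just a sketch I put together, not from the kernel tree; the file
name and child count are arbitrary). The parent takes a write lock, the
children queue up behind it in a known order, and the order of the
"got lock" messages is whatever the scheduler decides:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

#define NCHILDREN 4

int main(void)
{
        struct flock fl;
        int fd, i;

        fd = open("/tmp/locktest", O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
                perror("open");
                exit(1);
        }

        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;         /* l_start = l_len = 0: whole file */

        if (fcntl(fd, F_SETLK, &fl) < 0) {
                perror("parent F_SETLK");
                exit(1);
        }

        for (i = 0; i < NCHILDREN; i++) {
                if (fork() == 0) {
                        /* POSIX locks are per-process, so the child does
                         * not inherit the parent's lock and blocks here.
                         */
                        sleep(i + 1);   /* queue up in a known order */
                        fl.l_type = F_WRLCK;
                        if (fcntl(fd, F_SETLKW, &fl) < 0)
                                perror("child F_SETLKW");
                        printf("child %d got the lock\n", i);
                        fl.l_type = F_UNLCK;
                        fcntl(fd, F_SETLK, &fl);
                        exit(0);
                }
        }

        sleep(NCHILDREN + 1);           /* let all the children block */
        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);        /* unlock: the wait == 0 path */

        for (i = 0; i < NCHILDREN; i++)
                wait(NULL);
        return 0;
}

If the locking code handed the lock off strictly FIFO, the children would
always report in order 0 through 3; as far as I can tell nothing in the
wait == 0 path above guarantees that.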
Looking through the file, I can't find any call chain on an unlock, or
on closing the last fd holding a lock, that passes a nonzero wait value,
so in these cases it is always the scheduler that makes the decision.
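Concretely, the close path I am looking at boils down to something like
this (heavily abbreviated from my reading of the same file, so treat it
as a sketch of the call chain rather than a verbatim quote):

void locks_remove_posix(struct file *filp, fl_owner_t owner)
{
        /* ... walks inode->i_flock removing this owner's locks ... */
        locks_delete_lock(before, 0);           /* wait hardcoded to 0 */
        /* ... */
}

static void locks_delete_lock(struct file_lock **thisfl_p,
                              unsigned int wait)
{
        /* ... unlinks the lock from the inode's list ... */
        locks_wake_up_blocks(thisfl, wait);     /* so wait == 0 here too */
        /* ... */
}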
Am I missing something?
Chris