Rusty, here are a few comments!
In futex_wait we have:

	kmap(page)
	schedule_timeout()
	kunmap()

Is there potential for deadlock here? The mappings can be held for a long
time, and the KMAP slots are limited.
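If the worry holds, one way around it would be to drop the mapping before
sleeping and re-establish it afterwards. A rough sketch of that reordering
(kernel pseudocode, not compilable standalone; the surrounding locals are
whatever futex_wait actually uses, not mine):

```c
addr = kmap(page);
/* ... set up the wait-queue entry while the mapping is live ... */
kunmap(page);                   /* release the scarce KMAP slot ... */

schedule_timeout(timeout);      /* ... before possibly sleeping a long time */

addr = kmap(page);              /* remap only when the page is needed again */
/* ... re-check the futex word through the fresh mapping ... */
kunmap(page);
```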
Otherwise, the kernel patch looks good. I will run it through ulockflex
this weekend.
Other changes proposed for FUTEX 2.0
---------------------
A) In futex_down_timeout:
   - Get rid of woken; I don't see why you need it.
   - Optimize the while statement, unless there are some hidden gcc issues.
static inline int futex_down_timeout(struct futex *futx, struct timespec *rel)
{
	int val;

	/* Returns new value */
	while ((val = __futex_down(&futx->count)) != 0) {
		switch (__futex_down_slow(futx, val, rel)) {
		case -1:
			return -1;		/* error */
		case 0:
			futx->count = -1;	/* slept */
			/* fall through */
		case 1:
			return 0;		/* passed */
		}
	}
	return 0;				/* fast path: got the lock */
}
---------------------
Still missing something on futex_trydown!

	futex_trydown ::= futex_down == 1 ? 0 : -1

So if P1 holds the lock and P2 runs "while (1) { futex_trydown(); }", P2
keeps decrementing the counter; after wrapping around it will at some point
yield "1" and thus be granted the lock. At 1 GHz on a 32-way system this
only requires a lock hold time of a few seconds. Doesn't sound like a good
idea.
This brings back the discussion on compare-and-swap: with compare-and-swap
this would be trivial to fix, since a failed attempt never modifies the
count.
Another solution would be to sync through the kernel at wraparound, so that
the count variable reflects the number of waiting processes.
That's for now....
--
Hubertus Franke (frankeh@watson.ibm.com)