This is essentially the same struct as mine. I used the pid of the task,
whereas you use the address of the task. You use an atomic count, whereas
I used an ordinary int, guarded by the embedded spinlock.
> #define nestlock_lock(snl) \
> do { \
>         if ((snl)->uniq == current) { \
That test can read uniq while it is being written by somebody else
(which, according to the code below, it can be). It needs protection.
You probably want the (< 24 bit) pid there instead, and atomic ops
on it.
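To illustrate the fix I mean, here is a hypothetical sketch using a C11 atomic for the owner field, so the unlocked fast-path test never sees a half-written value. The names (nest_owner, NEST_NOBODY, nest_owned_by) are made up for this illustration, and a plain int stands in for the pid.

```c
#include <assert.h>
#include <stdatomic.h>

#define NEST_NOBODY 0                 /* assume 0 is never a valid pid */

/* Owner identified by a small pid-like value, stored atomically so a
 * concurrent writer can never expose a torn value to the fast path. */
static _Atomic int nest_owner = NEST_NOBODY;

static int nest_owned_by(int pid)
{
	/* atomic_load is the protected read the unlocked test needs */
	return atomic_load(&nest_owner) == pid;
}
```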
>                 atomic_inc(&(snl)->count); \
OK, that's the same.
>         } else { \
>                 spin_lock(&(snl)->lock); \
>                 atomic_inc(&(snl)->count); \
>                 (snl)->uniq = current; \
Hmm .. else we wait for the lock, and then set count and uniq, while
somebody else may already have entered the first branch and be reading
them :-). You also exit with the embedded lock held, whereas I used the
lock only as a guard to make the ops atomic.
>         } \
> } while (0)
>
> #define nestlock_unlock(snl) \
> do { \
>         if (atomic_dec_and_test(&(snl)->count)) { \
>                 (snl)->uniq = NULL; \
>                 spin_unlock(&(snl)->lock); \
That's OK.
>         } \
> } while (0)
Well, it's not assembler either, but at least it's directly comparable
with the nonrecursive version: it's essentially the nonrecursive lock
plus an extra if and an inc in the lock path. That's all.
I suspect that the rwlock assembler can be coerced to do the job.
Peter