> reacquire_kernel_lock(current);
> if (unlikely(current->need_resched))
> goto need_resched_back;
> return;
>
> The question is: why do we reacquire the kernel lock before checking
> for need_resched? If it is not needed, we can save a lock/unlock
> cycle in the need_resched case.
That code stems from something I discovered while reducing scheduling
latencies for the very first lowlatency patchset I released. Back then,
one to three years ago, kernel_lock usage was still common in the kernel,
so tasks often spent more than 1 msec spinning for the kernel lock, and
they often did so in the reacquire path. To reduce those latencies, I
added ->need_resched checking to kernel_lock() and to
reacquire_kernel_lock() as well.
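To illustrate the pattern, here is a minimal userspace sketch using C11
atomics. It is a model of the idea only, not the actual 2.4 BKL code:
bkl_sim, need_resched_sim and the helper names are made up for the
example, with sched_yield() standing in for schedule(). The point is
that each contended spin iteration becomes a preemption point.

	/*
	 * Userspace model only -- not the kernel's code. The atomic
	 * flag stands in for the big kernel lock, the int for
	 * current->need_resched.
	 */
	#include <stdatomic.h>
	#include <sched.h>
	#include <stdio.h>

	static atomic_flag bkl_sim = ATOMIC_FLAG_INIT;
	static atomic_int need_resched_sim;

	/* Spin for the lock, but check the reschedule flag on every
	 * contended iteration: if a reschedule is pending, yield
	 * instead of adding the remaining spin time to the scheduling
	 * latency of the task that set the flag. */
	static void lock_kernel_sim(void)
	{
		while (atomic_flag_test_and_set_explicit(&bkl_sim,
						memory_order_acquire)) {
			if (atomic_load(&need_resched_sim))
				sched_yield();	/* schedule() in the kernel */
		}
	}

	static void unlock_kernel_sim(void)
	{
		atomic_flag_clear_explicit(&bkl_sim, memory_order_release);
	}

	int main(void)
	{
		lock_kernel_sim();
		puts("lock taken");
		unlock_kernel_sim();
		return 0;
	}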
These days kernel_lock contention doesn't happen all that often, so I
think we should remove this check from the 2.5 tree. I consider the
preemptible kernel patch to be the most advanced way of getting low
scheduling latencies anyway.
> This code isn't new with the O(1) scheduler, so I'm guessing there is
> a need to hold the kernel_lock when checking need_resched. I just
> don't know what it is.
There is no need for the check to be under the kernel_lock; I simply
found the reacquire path to be an easy and common preemption point.
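For completeness, the reordering the question suggests would look
roughly like this against the snippet quoted above. This is a sketch
only: in the real schedule() the need_resched_back path drops the lock
again near the top of the loop, so that release would have to be
skipped as well for the saved lock/unlock cycle to materialize.

	if (unlikely(current->need_resched))
		goto need_resched_back;	/* kernel lock still dropped */
	reacquire_kernel_lock(current);
	return;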
Ingo