> Also, you did not assure me that you interpret the problem correctly.
> netif_rx() is __insensitive__ to latencies due to blocked softirq
> restarts.
> It stops spinning only when it becomes a true cpu hog. And scheduling
> ksoftirqd is the only variant here.
Hi Ingo et al,
Attached is the approach I sent to l-k earlier this week: if a timer tick
goes off while we are processing softirqs, back off to ksoftirqd.
This should, statistically, do the right thing, even in the case of smart
softirqs like netif_rx.
Please consider,
Rusty.
--- linux-pmac/kernel/softirq.c	Sun Sep  9 15:11:37 2001
+++ working-pmac-ksoftirq/kernel/softirq.c	Mon Sep 24 09:44:07 2001
@@ -63,11 +63,12 @@
 	int cpu = smp_processor_id();
 	__u32 pending;
 	long flags;
-	__u32 mask;
+	long start;
 
 	if (in_interrupt())
 		return;
 
+	start = jiffies;
 	local_irq_save(flags);
 
 	pending = softirq_pending(cpu);
@@ -75,32 +76,32 @@
 	if (pending) {
 		struct softirq_action *h;
 
-		mask = ~pending;
 		local_bh_disable();
-restart:
-		/* Reset the pending bitmask before enabling irqs */
-		softirq_pending(cpu) = 0;
+		do {
+			/* Reset the pending bitmask before enabling irqs */
+			softirq_pending(cpu) = 0;
 
-		local_irq_enable();
+			local_irq_enable();
 
-		h = softirq_vec;
+			h = softirq_vec;
 
-		do {
-			if (pending & 1)
-				h->action(h);
-			h++;
-			pending >>= 1;
-		} while (pending);
-
-		local_irq_disable();
-
-		pending = softirq_pending(cpu);
-		if (pending & mask) {
-			mask &= ~pending;
-			goto restart;
-		}
+			do {
+				if (pending & 1)
+					h->action(h);
+				h++;
+				pending >>= 1;
+			} while (pending);
+
+			local_irq_disable();
+
+			pending = softirq_pending(cpu);
+
+			/* Don't spin here forever... */
+		} while (pending && start == jiffies);
 
 		__local_bh_enable();
+		/* If a timer tick went off, assume we're overloaded,
+		   and kick in ksoftirqd */
 		if (pending)
 			wakeup_softirqd(cpu);
 	}
-