now if this was AF_INET via ethernet I could imagine the irqbalance
making a difference (or even irqrate, even if irqrate should make no
difference until your hardware hits the limit of irqs it can handle).
but both pipe and AF_UNIX should not generate any irq load (other than
the IPI with the reschedule_task wakeups at least, but those depend only
on the scheduler, and IPI delivery isn't influenced by the
irqrate/irqbalance patches). it's all transmission in software internal
to the kernel, with no hardware events and so no irqs, so I would be very
surprised if irqbalance or irqrate could make any difference. I
would look elsewhere first at least. no idea why you're looking at those
irq related patches for this workload.
At first glance I would say it's either a compiler issue that generates
some very inefficient code one way or the other (that seems very unlikely,
but cache effects can be quite huge in tight loops where a very small part
of the kernel is exercised), or it has something to do with the scheduler
or similar core non-irq related areas.
>
> A config difference between pre10-jam2 and pre10-jam3 is:
> CONFIG_X86_SFENCE=y # pre10-jam2
> pre10-jam2 was compiled with -Os and pre10-jam3 with -O2.
>
> > Out of interest, is that a P4/Xeon?
>
> Quad P3/Xeon 700 MHz with 1MB cache.
>
> --
> Randy Hron
> http://home.earthlink.net/~rwhron/kernel/bigbox.html
Andrea
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/