It's the usual "soft RT" thing (SCHED_FIFO, mlockall), except that it
involves pipe-driven IPC between two RT threads to get the work
done. Read the website if you want to understand why; it's not relevant
here.
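
For anyone who hasn't seen this pattern, the setup amounts to roughly
the following (a minimal sketch, not JACK's actual code; the priority
value is a placeholder):

#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 }; /* placeholder */
	int fds[2];

	/* run with a fixed real-time priority */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
		perror("sched_setscheduler");
		return 1;
	}
	/* pin all current and future pages so the RT path never faults */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
		perror("mlockall");
		return 1;
	}
	/* pipe used to hand work between the two RT threads */
	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}
	/* ... the RT threads would now pass work via fds ... */
	return 0;
}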
It's clear from a week of instrumented kernels, and hours poring over
trace files, that "activity" elsewhere on the machine delays the
handling of interrupts from the audio interface by up to a
millisecond. Given that the interrupt frequency is about 1000Hz (i.e.
a period of roughly 1msec), a delay of that size eats the entire
budget for a cycle. This is not good :)
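
For what it's worth, the same effect shows up from userspace with a
crude wakeup-jitter watcher along these lines (a sketch, not the
instrumentation actually used here; the fd and the 1100 usec threshold
are placeholders):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* time successive wakeups on fd; with a ~1000Hz interrupt source,
 * each delta should stay near 1000 usecs -- anything well above
 * that is the delay described above */
static void watch_jitter(int fd)
{
	struct timespec prev, now;
	char buf[64];

	clock_gettime(CLOCK_MONOTONIC, &prev);
	for (;;) {
		if (read(fd, buf, sizeof(buf)) <= 0)
			break;
		clock_gettime(CLOCK_MONOTONIC, &now);
		long us = (now.tv_sec - prev.tv_sec) * 1000000L
			+ (now.tv_nsec - prev.tv_nsec) / 1000L;
		if (us > 1100)	/* nominal period is ~1000 usecs */
			fprintf(stderr, "late wakeup: %ld us\n", us);
		prev = now;
	}
}

int main(void)
{
	watch_jitter(STDIN_FILENO); /* stand-in for the audio/IPC fd */
	return 0;
}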
I know from trials of older kernels and systems, in which all the audio
handling resided in a single thread, that we can handle a 1.3msec
interrupt interval without real problems. But now that JACK splits the
handling across two threads in different processes, the thing that
kills us is not context-switch time, nor the delay caused by cache
and/or TLB invalidation, nor any of that stuff. Instead, it's that the
execution of the audio interface's interrupt handler gets delayed to
the point where our assumption that every interrupt is handled on time
falls apart.
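
The handoff between the two halves is just a one-byte write on one side
and a blocking read on the other, roughly like this (a sketch of the
pattern, not JACK's code; the function names are made up):

#include <stdio.h>
#include <unistd.h>

/* producer: finish our half of the audio cycle, then wake the peer */
static void wake_peer(int pipe_wr)
{
	char token = 0;
	if (write(pipe_wr, &token, 1) != 1)
		perror("write");	/* wakes the reader; one context switch */
}

/* consumer: block until the peer hands the cycle over */
static void wait_for_peer(int pipe_rd)
{
	char token;
	if (read(pipe_rd, &token, 1) != 1)
		perror("read");
}

int main(void)
{
	int fds[2];

	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}
	wake_peer(fds[1]);
	wait_for_peer(fds[0]);
	return 0;
}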
The extra work doesn't involve any increased disk activity; the only
additions are more writes to memory (all of which is mlocked), writes
to pipes for IPC, and extra context switches.
Which of these is most likely to cause interrupts to be masked for a
millisecond or more? I know it's not the handler for the interrupt in
question - it never takes more than 25 usecs to execute.
--p