Yes, I believe this is what RTLinux and RTAI do. The sti macro then
needs to check for pending interrupt work, much as bh_unlock does.
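Something like this (an untested sketch; all the soft_*/dispatch names
are mine, not the actual RTLinux/RTAI symbols):

	/* "cli" just sets a flag; real interrupts stay enabled and the
	 * low-level entry code checks the flag. */
	#define barrier()	__asm__ __volatile__("" : : : "memory")

	static volatile int soft_irqs_disabled;
	static volatile unsigned long soft_pending_mask;

	extern void dispatch_irq(unsigned long irq);	/* made up */
	extern void soft_replay_pending(void);		/* made up */

	#define soft_cli()	do { soft_irqs_disabled = 1; barrier(); } while (0)

	#define soft_sti()						\
		do {							\
			soft_irqs_disabled = 0;				\
			barrier();					\
			if (soft_pending_mask)				\
				soft_replay_pending();			\
		} while (0)

	/* Low-level interrupt entry: if IRQs are "soft disabled", only
	 * record the interrupt; soft_sti() replays it later. */
	void soft_irq_entry(unsigned long irq)
	{
		if (soft_irqs_disabled) {
			soft_pending_mask |= 1UL << irq;
			return;
		}
		dispatch_irq(irq);
	}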
>
> I have more problems with the semantic difference of cli/sti
> vs. spinlocks on nesting: a cli/sti will never block anything but
> interrupts, while a spinlock will block the caller on contention.
>
> This should be measured. Unfortunately I don't have fast enough
> hardware to notice a difference.
>
>
>>The better thing to do, IMHO, is to move the work off the interrupt
>>path where only spin locks (and preemption locks) are required.
>
>
> This is done incrementally already, so it's only a matter of time.
>
>
>>Another possibility is to make more use of the new read/write stuff
>>that is now being used for the xtime lock. The idea here is that we
>>don't care if an interrupt (or the other machine) visits this data
>>while we are here as long as we know about it and can then redo what
>>we are doing. This works very well for fetching a few words that need
>>to be fetched atomically WRT the lock. If the fetch is not atomic (i.e.
>>was interrupted), just try again.
>
>
> Not suitable for drivers. They must read the registers, set some
> other registers and ACK the IRQ in the ISR. There is no way
> around that.
The lock has two sides, the reader and the writer. The writer still
takes the irq/spinlock; only the reader gets the speedup.
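On the read side that comes down to something like this (a sketch using
the seqlock primitives; the extern declarations stand in for the real
kernel globals):

	#include <linux/seqlock.h>
	#include <linux/time.h>

	extern seqlock_t xtime_lock;
	extern struct timespec xtime;

	/* Reader: no cli/sti.  If the timer interrupt updated xtime
	 * while we copied it, the sequence count changed and we retry. */
	struct timespec get_time_snapshot(void)
	{
		struct timespec ts;
		unsigned seq;

		do {
			seq = read_seqbegin(&xtime_lock);
			ts = xtime;
		} while (read_seqretry(&xtime_lock, seq));

		return ts;
	}

	/* Writer (e.g. the timer interrupt) still takes the lock with
	 * IRQs off, as said above: */
	void set_time_snapshot(struct timespec now)
	{
		unsigned long flags;

		write_seqlock_irqsave(&xtime_lock, flags);
		xtime = now;
		write_sequnlock_irqrestore(&xtime_lock, flags);
	}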
>
> If the maximum latency between the ISR and the process-context
> work were small and definable by the driver, people would offload
> more.
>
> So a schedule_work_deadline() would be nice. The routine later
> called in process context would be told whether the deadline was
> missed and could act accordingly.
>
> This would simplify IDE (which needs those small timeouts and
> could act as if the operation timed out when the deadline is
> missed), simplify the deadline scheduler for I/O, and allow an
> additional scheduling method for user space.
>
> The scheduler change would be one additional queue, and it could
> also be depth-limited, since entries with later deadlines can be
> added later.
>
> We currently have timers, but they are suitable only for
> triggering the work, not for doing it, and that is a source of
> complexity in driver design.
Something of this sort is already present in the workqueue design:
there is a call to schedule work for a later time.
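E.g. (a sketch against the workqueue API as it stands; the foo_*
names are made up, and unlike the proposed schedule_work_deadline()
there is no deadline feedback):

	#include <linux/workqueue.h>
	#include <linux/param.h>	/* HZ */

	/* Runs later in process context; the offloaded part of the
	 * driver's IRQ work goes here.  A schedule_work_deadline() as
	 * proposed above would also tell us whether the deadline held. */
	static void foo_work_fn(void *data)
	{
	}

	static struct work_struct foo_work;

	static int foo_init(void)
	{
		INIT_WORK(&foo_work, foo_work_fn, NULL);
		/* run foo_work_fn roughly 10 ms from now */
		schedule_delayed_work(&foo_work, HZ / 100);
		return 0;
	}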
>
>
>>I haven't measured or heard of
>>measurements on this change, but I suspect that there is a significant
>>reduction in the time to do gettimeofday() because the cli/sti is not
>>on the read path any more.
>
>
> I agree that this kind of historical code should be changed, but
> modern drivers, which need to protect against IRQs from the device
> arriving while they change registers, cannot do this.
>
> They really need the spin_lock_irqsave() (which does the cli
> implicitly).
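Right, that is the canonical pattern (a sketch; the foo_* names and
register offsets are made up):

	#include <linux/spinlock.h>
	#include <linux/interrupt.h>
	#include <asm/io.h>

	struct foo_dev {
		spinlock_t lock;
		unsigned long regs;	/* ioremap()ed base */
	};

	/* Process context: block out our own ISR while changing
	 * registers.  The cli happens implicitly, and on SMP the lock
	 * also excludes the ISR running on another CPU. */
	static void foo_set_mode(struct foo_dev *dev, unsigned int mode)
	{
		unsigned long flags;

		spin_lock_irqsave(&dev->lock, flags);
		writel(mode, dev->regs + 0x00);		/* CTRL */
		spin_unlock_irqrestore(&dev->lock, flags);
	}

	/* ISR: read registers, set others, ACK the IRQ -- the part
	 * that cannot move out of interrupt context.  Our own IRQ
	 * line is disabled while the ISR runs, and only this driver
	 * takes the lock, so plain spin_lock() suffices here. */
	static void foo_isr(int irq, void *dev_id, struct pt_regs *regs)
	{
		struct foo_dev *dev = dev_id;

		spin_lock(&dev->lock);
		writel(1, dev->regs + 0x04);		/* IRQ ACK */
		spin_unlock(&dev->lock);
	}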
>
> Regards
>
> Ingo Oeser
--
George Anzinger   george@mvista.com
High-res-timers: http://sourceforge.net/projects/high-res-timers/
Preemption patch: http://www.kernel.org/pub/linux/kernel/people/rml