The Linux core *is* tiny, and for that reason attractive for this purpose.
I agree that it is still too complex to verify formally, and we've suffered
because of that: when will we see the last truncate race? When will we
see a VM that doesn't fall over in corner cases? I don't accept 'never' as
an answer to this. Much of our current work - removal of the buffer cache,
elimination of buffer heads in most roles, coalescing of the IDE and SCSI
block flavors, introduction of reverse-mapping in the VM - moves the kernel
in the direction of less complexity, in the sense that state transitions and
subsystem interactions become easier to audit.
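For what it's worth, the bug class is easy to demonstrate outside the kernel.
Below is a minimal user-space sketch of the check-then-act window behind a
truncate race - not the actual kernel code paths; 'file_size' and 'data' are
stand-ins for i_size and the cached file pages:

/* Minimal user-space analogy of a truncate race: one thread shrinks a
 * shared "file" while another checks the size and then reads, leaving
 * a window between the check and the read.  Illustrates the bug class
 * only; these are not the kernel's actual structures or locking. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char  *data;        /* stand-in for cached file pages */
static size_t file_size;   /* stand-in for inode->i_size     */

static void *reader(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		size_t off = 10;
		if (off < file_size) {          /* check ...              */
			/* ... window: truncate can run right here ...   */
			char c = data[off];     /* ... act: may hit freed
						   memory                */
			(void)c;
		}
	}
	return NULL;
}

static void *truncater(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		file_size = 0;          /* shrink the file        */
		free(data);             /* drop its "pages"       */
		data = malloc(64);      /* then re-extend it      */
		memset(data, 'x', 64);
		file_size = 64;
	}
	return NULL;
}

int main(void)
{
	data = malloc(64);
	memset(data, 'x', 64);
	file_size = 64;

	pthread_t r, t;
	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&t, NULL, truncater, NULL);
	pthread_join(r, NULL);
	pthread_join(t, NULL);
	puts("done - run under valgrind or TSan to see the hazard");
	return 0;
}

Closing every such window by hand is exactly the kind of audit that only
becomes tractable when the number of interacting subsystems goes down, which
is the point of the cleanups above.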
The day of the tiny, single-purpose OS-let in industrial applications is
pretty much over. To illustrate: among our requirements were support for
modern hardware such as accelerated video cards, and for high-level
functions such as TCP stacks. We even had to run a database on the machine,
and a GUI. Don't even think of asking the hardware engineers to design a
two-processor system so that one of them can run a simplified OS. They won't
do it, because they know perfectly well that one processor will do the
job - and it did, and is doing it today. Reliably, and on what OS? DOS.
Surely if DOS can do it, then Linux can do it better. But not if we admit
defeat before starting.
-- Daniel