Actually, up to 30 CPUs, mixing and matching pairs of 12 or 16 MHz 80386s,
25 or 50 MHz 80486s, or 60 MHz Pentiums on the same SMP bus (known as
the Symmetry 2000 series).
NUMA-Q went to 64 processors - built from quad building blocks (4
identical processors) where you could mix and match Pentium Pro, Xeon,
and Pentium III quads at various clock rates.
The internals of the OS looked a lot like any standard OS, with a few
pieces of secret sauce (e.g. the read-copy-update stuff - see
sf.net/projects/lse, multiprocessor counters, etc.) Code complexity
was not orders of magnitude higher to support these high end machines
with near linear scalability. It had some areas that were more complex,
but you train your staff on the complexities, and it becomes standard
practice. Just as Linux is enormously more complex than the early DOS
OSes, DYNIX/ptx is somewhat more complex than Linux. But is Linux already
so complex that no one can contribute? I don't think so. Could DOS
engineers contribute to Linux? I think they have - with some updated
training and knowledge about the additional complexities.
I really don't think Larry's fears can be addressed only by yet another
OS rewrite. In some cases, just awareness of how to use the next
level of complex technology is enough to evolve the programmer to the
next stage. It is happening all the time. That doesn't mean a fresh
approach is bad, either. But the premise that engineers will never be
smart enough to program for high-count SMP is ridiculous. The level
of complexity just isn't that much higher than where we are today.
gerrit