> Of course, checking errors in order to handle them sanely is a good
> thing. Nobody is arguing that. What I am arguing is that failing to
> check errors when they can "never happen" is wrong.
Actually, checking for _all_ even remotely possible and checkable error
conditions (as long as the check doesn't incur an intolerable overhead) is a
very important requirement for writing high-quality code. Even if the code
isn't "fault tolerant" (because it may not know how to recover, as with the
ill-defined semantics of close() returning an error), it will at least be
"fail-fast": it gives an error message close to the cause and terminates in a
co-ordinated manner before corrupting data.
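
To illustrate, here is a minimal user-space sketch of that style (the
file name and the die() helper are mine, purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* Fail fast: report the failing call and errno, then terminate
     * before any corrupted state can propagate further. */
    static void die(const char *what)
    {
            fprintf(stderr, "%s: %s\n", what, strerror(errno));
            exit(EXIT_FAILURE);
    }

    int main(void)
    {
            const char buf[] = "important data\n";
            int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0)
                    die("open");
            if (write(fd, buf, sizeof(buf) - 1) != (ssize_t)(sizeof(buf) - 1))
                    die("write");
            /* close() may report deferred write errors (e.g. ENOSPC,
             * or EIO on NFS); ignoring it can silently lose data. */
            if (close(fd) < 0)
                    die("close");
            return 0;
    }

There is no sane recovery from a failed close() here, but the program
still dies loudly next to the cause instead of pretending the data is
safely on disk.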
It troubles me deeply that some people hacking on the Linux kernel do not
consider this a good thing.
And with that, I conclude my point and step out of the discussion for good.
Sincerely,
Lars Marowsky-Brée <lmb@suse.de>
--
Immortality is an adequate definition of high availability for me.
        --- Gregory F. Pfister