> Pete> The problem with errors from close() is that NOTHING SMART can be
> Pete> done by the application when it receives one. An application can:
>
> Pete> a) print a message "Your data are lost, have a nice day\n".
> Pete> b) loop retrying close() until it works.
> Pete> c) do (a) then (b).
>
> I must disagree with you. We run the Andrew File System (AFS), which
> has client-side caching with write-on-close semantics. If an error
> occurs at close() time, a well-written application can
> actually do something useful - such as sending an alert, or letting
> the user know the action failed.
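
In other words, something like this (a sketch only; the error
reporting is illustrative, not from any real program):

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    int save_and_check(int fd)
    {
        if (close(fd) < 0) {
            /* On AFS the cached writes are flushed here,
             * so this is where a write failure shows up. */
            fprintf(stderr, "write-back failed: %s\n",
                strerror(errno));
            return -1;    /* caller alerts the user */
        }
        return 0;
    }
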
The above is an example of an application covering up for
a filesystem that breaks the general expectations of the
operating environment. Remember your precursor with the broken
driver who received his beating a couple of months ago.
He, too, had an application which processed errors from
close() just fine. A workaround can be done in every specific
instance, but that does not make the practice any smarter.
What the AFS designers should have done, if they had a brain
larger than a pea, was:
1. Make close() block indefinitely, retrying writes.
   Allow overlapping writes and let clients sort it out.
2. Provide an ioctl to flush writes before close(), or
   make fsync() work right. Your "smart" applications would
   then have to use that, so that no ambiguity exists between
   tearing down the descriptor and writing out the data
   (see the sketch below).
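
A sketch of what option 2 looks like from the application side,
assuming fsync() on AFS is fixed to flush the cached writes and
report the write-back errors:

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    int write_out(int fd)
    {
        if (fsync(fd) < 0) {
            /* The data did not make it to the server. */
            fprintf(stderr, "flush failed: %s\n", strerror(errno));
            close(fd);        /* teardown only */
            return -1;
        }
        return close(fd);     /* no pending data left to lose */
    }
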
This way, naive applications such as cat and cc would
continue to work. There is no reason to penalize them just
because some application _could_ possibly post idiotic alerts
(Abort, Retry, Fail).
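
For comparison, roughly the naive pattern that point 1 keeps
working (cat boiled down to a sketch):

    #include <unistd.h>

    /* The cat-style loop: no error handling around close().
     * Under point 1 above, close(out) simply blocks until the
     * cached writes are retried to completion, so this code
     * stays correct without modification. */
    void naive_copy(int in, int out)
    {
        char buf[4096];
        ssize_t n;

        while ((n = read(in, buf, sizeof(buf))) > 0)
            write(out, buf, n);    /* return value ignored */
        close(out);                /* blocks until data is safe */
    }
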
-- Pete