I have to disagree. The whole point of accurate timestamps is that a
program can reliably ask "are my assumptions about the contents of
this file still valid?", "do I have to read the file to revalidate my
assumptions?"
When you don't have accurate timestamps, or resolutions are mixed,
then it's not possible to answer this question. The only correct
behaviour for a program, such as a caching dynamic web server, a
caching JIT compiler or something like Make, is to round the
timestamp _up_.
Which is fine so long as you can write this in the application code:
	if (ts_nanoseconds == 0)
		ts_nanoseconds = 999999999;	/* i.e. 1e9 - 1, kept integral */
That's a rare enough occurrence when timestamps have nanosecond
accuracy that the glitch is not a problem.
Unfortunately that application code breaks when the filesystem may
have timestamps with resolution better than 1 second, but worse than 1
nanosecond. Then the application just can't do the right thing,
unless it knows what rounding was applied by the kernel/filesystem, so
it can change that rounding in a safe direction.
So for applications which actually _depend_ on accurate timestamps for
reliability, I see only two valid things for the kernel to do:
1. Round timestamps _up_.
2. Or, round timestamps down _and_ store the rounding resolution in
struct stat, in addition to the timestamp.
I'm in favour of 1.
(With Paul's suggestion of just rounding down without telling me when
that happened, AFAICT my _reliable_ caching applications must simply
ignore the nanoseconds field, which is a bit unfortunate isn't it?)
-- Jamie