Would it make sense, when loading a time from disk, for the low-order,
non-stored bits of the time to be initialised high rather than low,
i.e. to 999,999,999 rather than 0?
This way timestamps would never seem to jump backwards, only
forwards, which seems less likely to cause confusion and would mean that a
change is not missed (I'm thinking of NFS here, where cache correctness
depends heavily on mtime).
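
A minimal sketch of that load-time rounding, in C (round_up_loaded is
a made-up name, and a real filesystem would of course work on its own
inode timestamp representation rather than a bare struct timespec):

#include <time.h>

#define NSEC_PER_SEC 1000000000L

/*
 * Hypothetical helper: 'ts' is a timestamp as decoded from disk,
 * already truncated to the filesystem's stored resolution, and
 * 'gran' is that resolution in nanoseconds (up to NSEC_PER_SEC for
 * a filesystem that stores whole seconds; assumed to divide evenly
 * into NSEC_PER_SEC).  Instead of leaving the non-stored low-order
 * bits at zero, set them to their maximum, i.e. round up to the
 * last representable instant of the stored granule.
 */
static struct timespec round_up_loaded(struct timespec ts, long gran)
{
	/* tv_nsec is a multiple of 'gran', so this stays below
	 * NSEC_PER_SEC; e.g. with gran == NSEC_PER_SEC,
	 * 23.000000000 becomes 23.999999999 */
	ts.tv_nsec += gran - 1;
	return ts;
}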
Also, would it make sense, for filesystems that don't store the full
resolution, to make that forward jump appear as early as
possible? I.e. if the mtime (ctime/atime) is earlier than the current
time at the resolution of the filesystem, then make the mtime appear to
be what it would be if reloaded from storage... Maybe an example
would help.
Assuming an internal resolution of 1 millisecond (to save on digits)
and a stored resolution of 1 second:
time      change is made    apparent timestamp
23.100          X                23.100
23.200                           23.100
23.300          X                23.300
23.500          X                23.500
23.900                           23.500
24.001                           23.999
25.000                           23.999
Thus the only incorrect observation that an application can make is
that there appears to be an extra change at the end of a second in
which other changes were made. I think this is better than an apparent
change suddenly becoming visible many minutes after the time of that
apparent change, and definitely better than a timestamp moving backwards.
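
In code, the rule the table follows might look something like this
(apparent_mtime is again a made-up name, reusing NSEC_PER_SEC from the
sketch above; the real test would live wherever the timestamp is
reported, e.g. stat or the NFS getattr path):

/*
 * Report the in-memory 'mtime' at full resolution only while the
 * current time 'now' is still inside the same stored granule; once
 * the clock has moved into a later granule, report what a reload
 * from storage would produce, i.e. the granule rounded up.
 * 'gran_sec' is the stored resolution in whole seconds (1 in the
 * example above).
 */
static struct timespec apparent_mtime(struct timespec mtime,
				      struct timespec now, time_t gran_sec)
{
	if (now.tv_sec / gran_sec > mtime.tv_sec / gran_sec) {
		/* a later granule has begun: show the reload value */
		mtime.tv_sec -= mtime.tv_sec % gran_sec;
		mtime.tv_sec += gran_sec - 1;
		mtime.tv_nsec = NSEC_PER_SEC - 1;  /* 23.500 -> 23.999 */
	}
	return mtime;	/* otherwise, full in-memory resolution */
}

With gran_sec == 1, calling this at 23.900 still returns 23.500, while
at 24.001 and later it returns 23.999, matching the table.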
NeilBrown