I have a server to which I constantly synchronize all of my important files as a read-only backup. Now, the issue is that when I make a change on device A, sync it to device B, make another change there, and then sync it to my server, there is a reasonable chance that the timestamps of the two changes will either overlap or end up in reverse order! This will, obviously, lead to data loss.
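To make the failure mode concrete, here is a minimal sketch of a "newest mtime wins" merge, which is my assumption about how a naive sync tool resolves conflicts (the `FileVersion` type and the skew numbers are made up for illustration):

```python
# Minimal sketch: a naive "newest mtime wins" merge, which is how many sync
# tools resolve conflicts. The specific tool behavior is an assumption here.
from dataclasses import dataclass

@dataclass
class FileVersion:
    device: str
    content: str
    mtime: float  # seconds since epoch, as reported by the device's own clock

def last_writer_wins(a: FileVersion, b: FileVersion) -> FileVersion:
    # Picks whichever version claims the later modification time.
    return a if a.mtime >= b.mtime else b

# Device A writes first; device B edits the same file two minutes later,
# but B's clock is three minutes behind A's.
older_edit = FileVersion("A", "draft v1", mtime=1_700_000_000.0)
newer_edit = FileVersion("B", "draft v2", mtime=1_700_000_000.0 + 120 - 180)

winner = last_writer_wins(older_edit, newer_edit)
print(winner.device, winner.content)  # -> "A draft v1": the newer edit silently loses
```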
What boggles my mind is how easy this would be to fix! Windows already uses NTP to sync the system clock, but it seems to do this so rarely that it regularly accrues seconds of drift. Why? An NTP sync requires a negligible amount of system resources.
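For what it's worth, checking the drift yourself only takes a few lines. This is a rough SNTP query using just the standard library, with pool.ntp.org as a placeholder server; it ignores most of what real NTP does, so treat the result as a ballpark offset, not a proper measurement:

```python
# Rough SNTP query: ask one server for its time and compare it to the local clock.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def rough_clock_offset(server: str = "pool.ntp.org") -> float:
    packet = b"\x1b" + 47 * b"\0"  # LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        t_send = time.time()
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
        t_recv = time.time()
    secs, frac = struct.unpack("!II", data[40:48])  # server transmit timestamp
    server_time = secs - NTP_EPOCH_OFFSET + frac / 2**32
    # Assume the server's transmit time corresponds to the midpoint of the round trip.
    local_midpoint = (t_send + t_recv) / 2
    return server_time - local_midpoint

if __name__ == "__main__":
    print(f"server time minus local time: {rough_clock_offset():+.3f} s")
```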
Even worse is my phone. It can obviously receive GPS signals for geolocation, and therefore has access to the most accurate time signal available. But it doesn't even use it! When my phone is disconnected from any network, its clock simply gets less and less accurate while the GPS antenna throws away a constant stream of atomic timestamps. And while it is connected to the network, it suffers from the same weird reluctance to sync itself against time servers.
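And the conversion the phone would need is trivial. This sketch assumes the current 18-second GPS-to-UTC leap-second offset, in effect since 2017 (the navigation message also broadcasts this value, so a real receiver wouldn't hard-code it); the example timestamp is made up:

```python
# GPS time counts seconds since 1980-01-06 with no leap seconds; UTC needs
# only the current leap-second count subtracted. The hard-coded 18 s is the
# offset in effect since 2017 and assumes no new leap second has been added.
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_LEAP_SECONDS = 18  # also broadcast in the GPS navigation message

def gps_to_utc(gps_seconds: float) -> datetime:
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_UTC_LEAP_SECONDS)

# Example: a (made-up) GPS timestamp of 1_400_000_000 s after the GPS epoch
print(gps_to_utc(1_400_000_000))
```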
Genuinely, am I missing something? My old $30 Casio radio watch keeps more accurate time than all of the far more sophisticated and expensive tech around me. Why?
I think most phones sync to the cellular network time when available. GPS would provide an alternate time source, but it's not always enabled, and maybe nobody thought to feed it into the system clock while the network isn't available.
On Windows, I think the default sync frequency is amazingly low; probably Microsoft didn't want to foot the bill, since they run the servers Windows syncs from, and with such a large install base, increasing the sync volume would be meaningful. Microsoft also used to run a time server somewhere in Washington state with really bad asymmetric delay, leading to pretty poor sync performance. But it looks OK now.
For example:
- The Windows time settings page says: "Last successful time synchronization: 3/7/2024..." (This was ~41 hours ago and not manually triggered.)
- https://time.is/: "Your clock is 0.4 seconds ahead."
I'm a little surprised there is a 0.4 second difference, but it doesn't seem to affect syncing via Dropbox/Google files.
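If you want to see how often your machine is actually configured to poll, this sketch reads the NtpClient provider's SpecialPollInterval from the registry; the path is what I believe is the default for non-domain-joined machines, so cross-check against `w32tm /query /configuration`:

```python
# Quick check of the configured Windows Time poll interval, assuming the
# machine uses the NtpClient provider's SpecialPollInterval (typical for
# non-domain-joined PCs). The registry path is my assumption of the default.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    interval_seconds, _ = winreg.QueryValueEx(key, "SpecialPollInterval")

print(f"configured poll interval: {interval_seconds} s "
      f"(~{interval_seconds / 3600:.1f} hours)")
```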
Perhaps the answer is that time accurate to within a second or so is good enough for most consumers.
(Yes, I care: I have run NTP servers, including some of the first in banking, since long before it was fashionable, and some of my code is probably still in ntpd.)