Since 1970, there have been 27 leap seconds applied. But these don't show up anywhere in any computer system I have ever used (OS, languages, applications, third party APIs). If I create a date object of 1970-01-01T00:00:00 and repeatedly add 86400 seconds, should I not end up with a date/time that is no longer midnight?
I assume that we are collectively just ignoring leap seconds and hoping for the best. Is this OK?
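To make the scenario concrete, here's a quick sketch using Python's standard `datetime` (any ordinary date library behaves the same way):

```python
from datetime import datetime, timedelta, timezone

# Start at the Unix epoch and repeatedly add exactly 86400 seconds.
t = datetime(1970, 1, 1, tzinfo=timezone.utc)
for _ in range(20000):  # roughly 54 years of days
    t += timedelta(seconds=86400)

# Despite the real leap seconds in that span, we are still at midnight,
# because datetime (like Unix time) pretends leap seconds don't exist.
print(t.time())  # 00:00:00
```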
The big reason to isolate this is that leap seconds aren't fully predictable. We can forecast them to some degree, but ultimately they depend on measurements of the Earth's rotation. Unlike leap days, you cannot safely code a system that accounts for future leap seconds: leap days follow a well-defined formula, leap seconds do not.
For time-sensitive applications like navigational computers, TAI is used. TAI is currently 37 seconds ahead of UTC and ticks in SI seconds. While the second was originally derived from the solar day, it is now defined by the vibrational frequency of a caesium atom.
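As a rough sketch, converting between the two is just a fixed offset at any given moment (the constant here is the current 37-second value and would need updating whenever a new leap second is announced):

```python
from datetime import datetime, timedelta, timezone

# TAI currently runs 37 seconds ahead of UTC (since the 2017 leap second).
TAI_UTC_OFFSET = timedelta(seconds=37)

def utc_to_tai(utc_dt):
    """Convert a UTC datetime to TAI, assuming the current fixed offset."""
    return utc_dt + TAI_UTC_OFFSET

utc = datetime(2024, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
print(utc_to_tai(utc))  # 2024-01-01 00:00:37+00:00
```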
For a more detailed explanation, http://mperdikeas.github.io/utc-vs-ut1-time.html is a good summary.
For an overview on how Google handles the smear see https://developers.google.com/time/smear
The timestamp on your computer is in a timescale (called Unix time) which is defined as not including leap seconds, so no. The advantage of this system is that there is a simple algorithm for converting the integers to points on a calendar and back again, much as you described (keep adding 86400 per day to go forward a year, a day's worth more if it's a leap year, etc.). The downside is that you can set your timestamp integer to some value and it can refer to a second in UTC which happened twice (as with those 27 leap seconds), or not at all (if a negative leap second is inserted in the future).
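That integer-to-calendar-and-back algorithm is what `time.gmtime` and `calendar.timegm` implement; a small sketch of the round trip:

```python
import calendar
import time

# The algorithm relies on every Unix day being exactly 86400 seconds.
ts = 86400 * 19723              # an arbitrary whole number of days since the epoch
parts = time.gmtime(ts)         # integer -> calendar fields (UTC)
back = calendar.timegm(parts)   # calendar fields -> integer

print(parts.tm_hour, parts.tm_min, parts.tm_sec)  # 0 0 0
print(back == ts)  # True -- round-trips exactly, leap seconds ignored
```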
If you wanted a timescale that tracks UTC exactly, you would need a different representation (something like TAI), and you would need a table of leap seconds in order to do the integer-to-calendar-or-back-again type of calculations.
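A sketch of what that lookup table involves. The table below is truncated to three real entries for illustration; a real implementation would load the full published list (e.g. from IERS Bulletin C) and keep it updated:

```python
import bisect

# (Unix timestamp at which the new offset takes effect, cumulative TAI-UTC
# offset in seconds). Truncated to three of the real entries.
LEAP_TABLE = [
    (63072000, 10),     # 1972-01-01
    (1435708800, 36),   # 2015-07-01
    (1483228800, 37),   # 2017-01-01
]
_BOUNDARIES = [t for t, _ in LEAP_TABLE]

def tai_minus_utc(unix_ts):
    """Look up the cumulative leap-second offset in force at a Unix time."""
    i = bisect.bisect_right(_BOUNDARIES, unix_ts)
    return LEAP_TABLE[i - 1][1] if i else 0

print(tai_minus_utc(1600000000))  # 37
```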
As with all things in engineering, it is a trade-off and what is the best choice depends very much on what you are trying to do.
In my work, I deal with dates and times, but a leap second would have no consequence. A negative leap second would be interesting, because database records could appear to have been created out of order (a possible problem for many apps), but my apps wouldn't care.
There are programmers that have to worry about leap seconds, I'm happy I'm not one of them.
That said, modern cloud environments do hide this problem for you with leap smearing [1], which seems like the ideal fix. It'd be nice to see the world move to smeared time by default so that 1 day = 86400 seconds stays consistent (as does 1 second = 10^9 nanoseconds); the trade-off is that the length of your computer's smallest subdivisions of time then varies intentionally as well as randomly.
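For the curious, the scheme Google documents is a 24-hour linear smear, from noon before the leap second to noon after; a sketch of the ramp:

```python
# Sketch of a 24-hour linear leap smear: instead of inserting an extra
# second at midnight, every second in the noon-to-noon window is stretched
# slightly, so the clock absorbs the leap second gradually.

SMEAR_WINDOW = 86400.0  # seconds

def smeared_offset(seconds_into_window):
    """Fraction of the leap second already absorbed at this point in the window."""
    return min(max(seconds_into_window / SMEAR_WINDOW, 0.0), 1.0)

print(smeared_offset(0))      # 0.0  (noon before the leap second)
print(smeared_offset(43200))  # 0.5  (midnight: half the second absorbed)
print(smeared_offset(86400))  # 1.0  (noon after: fully absorbed)
```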
Should leap seconds matter then? As long as you are synchronizing with the same source of truth as the financial institution's back-end, both clocks should tell the same time.
Of course, and that’s why you shouldn’t do that. I don’t know about other languages, but if you want to advance a certain number of days, you don’t just add seconds. Any code review would pick that apart.
You add (or subtract) date components, meaning you specify a day, week or whatever unit.
Do I misunderstand the question?