HACKER Q&A
📣 Jaxkr

People programming before 2000: why were 2 digit years a thing?


Recently dug out one of my dad’s old Apple laptops and was amazed to find it only supported 2-digit years. I was around as a young child for Y2K, and I’ve been thinking about it a lot lately.

Specifically: how was it ever a problem? What kind of thinking led to people making software with 2-digit years, even in the late 80s? Did it really save a lot of effort?

The idea of someone sitting down and making the conscious choice to design an OS that uses 2-digit years is hilarious to me, but was there a good reason?


  👤 jenkstom Accepted Answer ✓
There were several reasons. First, everybody used two-digit years for everything. When writing checks you used two digits. When you saw a date on television or in a book, it used a two-digit year. So it just made sense.

Second, there was the idea that it would save space. And really, it did. When your computer, new out of the box, had 3.5 KB of available storage, those two extra characters were very important.
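To make that concrete, here is a quick sketch of my own (the record layout and field sizes are made up, purely for illustration) of what two extra year characters per record cost on a machine with roughly 3.5 KB free:

    #include <stdio.h>

    /* Hypothetical address-book record, the way a small 1980s home
       program might lay one out. Field sizes are illustrative only. */
    struct rec_2digit {
        char name[20];
        char date[6];    /* "YYMMDD"   -- 2-digit year */
    };

    struct rec_4digit {
        char name[20];
        char date[8];    /* "YYYYMMDD" -- 4-digit year */
    };

    int main(void) {
        int free_bytes = 3584;    /* roughly 3.5 KB of free memory */
        printf("records that fit with 2-digit years: %d\n",
               free_bytes / (int)sizeof(struct rec_2digit));
        printf("records that fit with 4-digit years: %d\n",
               free_bytes / (int)sizeof(struct rec_4digit));
        return 0;
    }

In this toy layout that is 137 records versus 128. Not dramatic, but on a machine that small every record counted.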

And third, computers were new and things were changing so rapidly that everybody assumed everything would be replaced within a few years. It's a bit difficult to explain, but electronic data was considered a lot "less real" then than it is now. When you switched the computer off, it was gone. The idea that digital data would persist for years was difficult for a lot of people to grasp. Floppy disks had a realistic lifespan measured in months, later in years, but single digits for sure. It wasn't until optical drives became commonly available that digital personal data could last very long at all.


👤 yongjik
To expand on another good comment: space was precious. Really precious. If you were only a kid in the 90s, you'll have a hard time appreciating it. (But then again, ancient UNIX greybeards would say the same to me, I guess...)

For example, my first computer was an Apple II. It had 48 KB of RAM and 12 KB of ROM, which contained the initial bootstrap code corresponding to a BIOS, plus a BASIC interpreter, plus a REPL tool with a disassembler. All of that, in 12 KB. The front page of Hacker News is about 42 KB, not counting icons and HTTP headers: that's already about three times the size of a BIOS + BASIC interpreter + disassembler.

And the display was only 40x24 characters (or maybe 40x25, it was a long time ago). Nobody was mad enough to waste two columns, out of 40, just to print two digits that would be forever stuck at 19.

As another example, around that time Turbo Pascal was immensely popular, and every string started with a byte holding its length, so no string could be longer than 255 bytes. Good enough: what kind of madman would waste precious memory on a string longer than 255 characters?
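For anyone who hasn't seen it, the layout looked roughly like this (modelled in C here; ps_assign is just a helper name I made up, not a real Turbo Pascal routine):

    #include <stdio.h>
    #include <string.h>

    /* Rough C model of a Turbo Pascal string: byte 0 holds the length,
       the remaining bytes hold the characters. One length byte means
       the length can never exceed 255. */
    typedef struct {
        unsigned char len;
        char data[255];
    } PascalString;

    static void ps_assign(PascalString *s, const char *text) {
        size_t n = strlen(text);
        if (n > 255) n = 255;            /* anything longer gets truncated */
        s->len = (unsigned char)n;
        memcpy(s->data, text, n);
    }

    int main(void) {
        PascalString s;
        ps_assign(&s, "19 December 1989");
        printf("length byte = %u, text = %.*s\n", s.len, s.len, s.data);
        return 0;
    }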

If you go back to that period and think about the environment, using 4-digit years would have been considered madness, not the other way around.


👤 DrScump
In the earlier days of computing, storage was expensive. Saving two bytes on every date field, multiplied by thousands of records across hundreds of files, saved real money.
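Back-of-the-envelope, with numbers I'm making up purely to show the scale:

    #include <stdio.h>

    int main(void) {
        /* Illustrative figures only -- not from any real installation. */
        long bytes_saved_per_date = 2;
        long records_per_file     = 50000;
        long files                = 200;
        long total = bytes_saved_per_date * records_per_file * files;
        printf("bytes saved: %ld (about %ld MB)\n",
               total, total / (1024 * 1024));   /* ~20 million bytes */
        return 0;
    }

Twenty-odd megabytes is nothing today, but at historical per-megabyte disk prices that was a line item somebody noticed.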

Another example of contortion for the sake of storage was packed decimal on mainframes, where an 11-digit number, say, could be stored in 6 bytes rather than 11. There were even IBM 360/370 assembler arithmetic instructions for packed decimal, so values didn't have to be decoded and re-encoded for arithmetic.
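For the curious, packed decimal stores two decimal digits per byte, with the sign in the final half-byte, which is how 11 digits plus a sign fit in 6 bytes. A minimal sketch of the packing (sign conventions simplified; pack_decimal is my own illustration, not IBM's code):

    #include <stdio.h>
    #include <string.h>

    /* Pack a digit string into IBM-style packed decimal: two digits per
       byte, sign in the low nibble of the last byte (0xC positive,
       0xD negative). 11 digits + sign = 12 nibbles = 6 bytes. */
    static int pack_decimal(const char *digits, int negative,
                            unsigned char *out, size_t out_len) {
        size_t ndigits = strlen(digits);
        size_t nibbles = ndigits + 1;          /* digits plus the sign nibble */
        size_t nbytes  = (nibbles + 1) / 2;
        if (nbytes > out_len) return -1;

        memset(out, 0, nbytes);
        size_t nib = nbytes * 2 - 1;           /* last nibble holds the sign */
        out[nib / 2] |= negative ? 0x0D : 0x0C;
        for (size_t i = ndigits; i-- > 0; ) {  /* fill digits right to left */
            nib--;
            unsigned char d = (unsigned char)(digits[i] - '0');
            out[nib / 2] |= (nib % 2 == 0) ? (unsigned char)(d << 4) : d;
        }
        return (int)nbytes;
    }

    int main(void) {
        unsigned char buf[8];
        int n = pack_decimal("12345678901", 0, buf, sizeof buf);
        printf("%d bytes:", n);
        for (int i = 0; i < n; i++) printf(" %02X", buf[i]);
        printf("\n");    /* prints: 6 bytes: 12 34 56 78 90 1C */
        return 0;
    }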