HACKER Q&A
📣 thesuperbigfrog

Designing a 50 Year Computer


How would you design a computing platform (hardware and software) that would last for 50 years or longer?

By "50 years or longer" I don't mean a single device that would last for 50 years, but rather an open design and specification that anyone could build and corresponding software to run on the implemented design. RISC-V and Raspberry Pi both come to mind, but will they be around for 50 years?

The design would need to be powerful enough to run desktop-level software (for example, Raspberry Pi 4 or better) and would need an open operating system that could be adapted to accommodate device drivers for whatever hardware is connected to the computing platform.

What are your thoughts and ideas?


  👤 brucehoult Accepted Answer ✓
If it's not a single device then it's an instruction set, which means that performance specifications such as "The design would need to be powerful enough to run desktop-level software" are irrelevant. Any reasonable ISA can be implemented at any performance level.

I see two sides to "lasting 50 years":

1) all code written at the start will run at full (new) performance on machines built 50 years later, using their native instruction set, not a "compatibility mode". A more relaxed version is that all application programs will work, but OSes may need to be modified.

2) programs written 50 years from now will still find it practical to target the ISA. In general this means having enough address space, and being able to add new instructions for tasks not envisioned 50 years earlier (which old programs of course won't use, but that's fine).

The only ISA sin that can't be fixed later is too few address bits, so if you were starting now you'd certainly use 64 bit addresses. Counting 50 years from the early 90s, when MIPS and Alpha went 64 bit, 64 bits is clearly enough; counting 50 years from now it is much less certain -- address usage would have to grow at less than one bit every two years.
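
A rough back-of-the-envelope in C (the 44 bit / roughly 16 TB starting point and the one-bit-every-two-years trend are assumptions for illustration, not measured figures):

    /* If address needs keep growing by one bit every two years, how long
     * do 64 bit addresses last?  Starting point and rate are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double start_bits    = 44.0;   /* assumed: ~16 TB in a large machine today */
        double bits_per_year = 0.5;    /* assumed trend: one bit every two years   */
        int    years         = 50;

        printf("bits needed after %d years: ~%.0f (limit: 64)\n",
               years, start_bits + bits_per_year * years);
        printf("to stay inside 64 bits: at most one bit every %.1f years\n",
               years / (64.0 - start_bits));
        return 0;
    }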

IBM 360 is almost 60 years old, Intel x86 is almost 45 years old, and ARM is over 35 years old.

None of them can run unmodified old programs in their current native execution mode.

Both IBM 360 and ARM started with 24 bit addresses in a 32 bit register, with the upper 8 bits ignored and able to be used for other purposes -- and in fact used for other purposes by common software. ARM put the condition codes there!
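
A minimal C sketch of that trick, with invented field names -- pack a 24 bit address and a tag byte into one 32 bit word, which stops working the moment the hardware starts interpreting the top byte as address:

    #include <stdint.h>
    #include <stdio.h>

    #define ADDR_MASK 0x00FFFFFFu   /* 24 bit address field          */
    #define TAG_SHIFT 24            /* tag/flag byte lives up here   */

    /* Hypothetical packing helper of the kind old software relied on. */
    static uint32_t pack(uint32_t addr24, uint8_t tag)
    {
        return (addr24 & ADDR_MASK) | ((uint32_t)tag << TAG_SHIFT);
    }

    int main(void)
    {
        uint32_t word = pack(0x123456, 0xC0);
        printf("address %06X, tag %02X\n", word & ADDR_MASK, word >> TAG_SHIFT);
        /* On a later CPU that uses all 32 bits for addressing, the same
         * word now points at 0xC0123456 -- unmodified binaries break.   */
        return 0;
    }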

Both have also moved from 32 bit to 64 bit in the last 20 years (z/Architecture and AArch64), with ARM's in particular being completely incompatible with its 32 bit ISA. It's not easy even to translate assembly language from 32 bit to 64 bit ARM, or to adapt a compiler -- a completely new back end is required.

While ARM currently has CPU cores that run both 32 bit and 64 bit code, more and more systems are being made which run AArch64 only -- no 32 bit compatibility. This includes iPhones since the iPhone 8, the M1 Macs, Cavium/Marvell ThunderX server chips, and the Fujitsu A64FX supercomputer chip. This is only going to increase.

This is why I think compatibility mode doesn't count.

x86 has perhaps stayed a bit more compatible, but not enough to run unmodified programs. The 80386 kept the same instructions and instruction encoding but changed the default operand and address size from 16 to 32 bits, so to get the same effect as an 8086 instruction a prefix byte has to be added. In contrast, on 64 bit x86 the default is still 32 bit operands and a prefix byte is used to select 64 bit operands -- so this is more compatible. But there are still multiple reasons a 386 program won't run on amd64, including dropping the single-byte INC and DEC instructions (to make room for the REX prefixes) and hard-wiring the CS, SS, DS, and ES segment bases to 0.
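
A few standard encodings, held in C arrays purely to make the point (they aren't executed here):

    #include <stdint.h>
    #include <stdio.h>

    /* The same 0x40 byte means different things in each mode: */
    static const uint8_t inc_8086[]    = { 0x40 };             /* 8086:  inc ax  */
    static const uint8_t inc_386[]     = { 0x40 };             /* 386:   inc eax */
    /* In 64 bit mode 0x40..0x4F are REX prefixes, so INC needs the long form: */
    static const uint8_t inc_eax_64[]  = { 0xFF, 0xC0 };       /* amd64: inc eax */
    static const uint8_t inc_rax_64[]  = { 0x48, 0xFF, 0xC0 }; /* amd64: inc rax (REX.W) */
    /* Default operand size in 64 bit mode is still 32 bits; REX.W widens it: */
    static const uint8_t add_eax_ebx[] = { 0x01, 0xD8 };       /* add eax, ebx   */
    static const uint8_t add_rax_rbx[] = { 0x48, 0x01, 0xD8 }; /* add rax, rbx   */

    int main(void)
    {
        printf("inc: %zu byte on 8086, %zu byte on 386, "
               "%zu bytes (inc eax) or %zu (inc rax) on amd64\n",
               sizeof inc_8086, sizeof inc_386,
               sizeof inc_eax_64, sizeof inc_rax_64);
        printf("add: %zu bytes for 32 bit operands, %zu for 64 bit\n",
               sizeof add_eax_ebx, sizeof add_rax_rbx);
        return 0;
    }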

DEC Alpha was designed with an aim to be relevant for 25 years. It would have easily made that -- and perhaps 50 years -- if the company hadn't died.

The main technical problem with Alpha is that program code size is very large -- only Itanium is worse. In the 90s this was thought not to be a problem, because code was becoming a small part of overall application size and you could just put in a larger L1 cache. Now, however, we are seeing that larger L1 caches limit the clock rate, and code fetch bandwidth has become a major part of total energy use, especially in mobile devices.

So, a 50 year ISA should have compact code.

AMD64 and ARM64 have very similar program code size -- both a lot better than original 32 bit ARM, MIPS, SPARC, PowerPC, Alpha, or Itanium -- but considerably worse than ARM's Thumb2 (32 bit only) and RISC-V (both 32 bit and 64 bit).

Of everything available now, I think RISC-V RV64 has the best chance of still being relevant and in use in 50 years, because:

- 64 bit addresses might just be enough

- there is a promise that existing opcodes will never be removed or change meaning

- there is a lot of room left to add future instructions

- compact code size

- it's not owned by a company that might lose interest, deliberately try to obsolete it, or go out of business

RV128 would be a safer bet. It's planned for, with space reserved for it in the encoding scheme, but not fully finalised yet. No one has built an RV128 chip yet, but Fabrice Bellard has had a JavaScript RV128 emulator online for several years, and it would be fairly trivial for anyone who wants one to create an RV128 core for an FPGA.
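
A simplified C version of the length-encoding convention from the base spec shows how the wider encodings are held in reserve (the formats beyond 32 bits aren't frozen, so treat the details as provisional and check the current unprivileged spec):

    #include <stdint.h>
    #include <stdio.h>

    /* Instruction length is determined by the low bits of the first
     * 16 bit parcel; everything past 32 bits is reserved room to grow. */
    static int insn_length_bits(uint16_t parcel)
    {
        if ((parcel & 0x03) != 0x03) return 16;        /* compressed (C)  */
        if ((parcel & 0x1C) != 0x1C) return 32;        /* standard 32 bit */
        if ((parcel & 0x3F) == 0x1F) return 48;
        if ((parcel & 0x7F) == 0x3F) return 64;
        /* bits [6:0] all ones: length is 80 + 16*nnn from bits [14:12];
         * nnn == 111 is reserved for instructions of 192 bits or more.  */
        int nnn = (parcel >> 12) & 0x7;
        return nnn == 0x7 ? -1 : 80 + 16 * nnn;
    }

    int main(void)
    {
        printf("%d\n", insn_length_bits(0x4501));  /* c.li a0, 0              -> 16 */
        printf("%d\n", insn_length_bits(0x0513));  /* addi a0, a0, 0 (parcel) -> 32 */
        return 0;
    }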

128 bits is enough to address every atom in 16 billion tonnes of silicon -- a cube almost 2 km on a side.

It seems unlikely that a computer's internal RAM will ever exceed that.
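
A quick C check of that arithmetic, using the usual textbook values for silicon:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double atoms      = ldexp(1.0, 128);  /* 2^128 ~= 3.4e38            */
        double avogadro   = 6.022e23;         /* atoms per mole             */
        double molar_mass = 28.09;            /* g/mol for silicon          */
        double density    = 2.33;             /* g/cm^3 for silicon         */

        double grams  = atoms / avogadro * molar_mass;
        double cm3    = grams / density;
        double side_m = cbrt(cm3) / 100.0;    /* edge of an equivalent cube */

        printf("%.0f billion tonnes of Si, a cube about %.1f km on a side\n",
               grams / 1e6 / 1e9, side_m / 1000.0);   /* ~16e9 t, ~1.9 km   */
        return 0;
    }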