Commodity servers are built from components that not only have a chance of failing, but whose data sheets predict when each component will fail based on the applied voltages, currents, and temperatures. There were (and may still be) programs that could predict how long a device would last based on the component specs, but I forget what they were called. Something like OrCAD, I believe. No idea if that still exists; I could not afford the license at the time (1990s).
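To put a rough number on that idea, here is a toy version of such a prediction in C, using the standard series-model arithmetic (the system failure rate is the sum of the parts' failure rates, and the survival probability is R(t) = e^(-t/MTBF)). Every per-component failure rate below is a made-up placeholder for illustration, not a real data-sheet figure:

    /* Toy parts-count reliability estimate (series model). All the
       lambda values are made-up placeholders, not real data-sheet
       figures. Build with: cc mtbf.c -o mtbf -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* illustrative failure rates, in failures per million hours */
        double lambda[] = { 2.0,   /* power supply (placeholder) */
                            0.5,   /* mainboard    (placeholder) */
                            1.5,   /* disk         (placeholder) */
                            0.2 }; /* RAM          (placeholder) */
        double total = 0.0;
        for (size_t i = 0; i < sizeof lambda / sizeof lambda[0]; i++)
            total += lambda[i];    /* series model: rates simply add */

        double mtbf_hours = 1e6 / total;         /* mean time between failures */
        double t = 20.0 * 365.25 * 24.0;         /* 20 years, in hours */
        double survival = exp(-t / mtbf_hours);  /* R(t) = e^(-t/MTBF) */

        printf("system MTBF: %.0f hours (~%.1f years)\n",
               mtbf_hours, mtbf_hours / (365.25 * 24.0));
        printf("P(no failure in 20 years): %.1f%%\n", survival * 100.0);
        return 0;
    }

Even with fairly optimistic per-part numbers like these, the chance of a box going 20 years without a single failure comes out to roughly a coin flip, which is the point about luck below.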
Some people may have anecdotes of servers with 30+ year uptimes, and that is certainly a thing, but it is entirely luck of the batch and luck of the environment, not in any way predictable. And this is just hardware; it doesn't even touch on BIOS/firmware/driver/operating-system fault handling and stability.
I used to pride myself on long uptimes. Then I realised that you have to reboot the system anyway if you want to upgrade the kernel or system version.
The same thing applies to hardware. Can you still buy floppy disks, for instance? Maybe somewhere, yes, but not like 20 years ago, when they were available in every corner supermarket. CDs and DVDs, likewise. And what happens if you have all your backups on floppies (say) and you need to find a replacement floppy drive right now?
It might be possible to build a server that will last 20 years, but would you want to stake your business or your life on it?
Since you haven’t set any performance requirements… if you are happy with an RTOS instead of something like Linux, you could look at super basic parts: solid-state capacitors, for instance, have no electrolyte to dry out.
I’d be pretty confident a high-end Arduino-style Atmel/AVR-based “computer” could easily be designed to last 20 years or even longer. Add some redundancy, with DIP switches or electronic fuses to fail over to the replacement parts, as rare as such failures would be in older-style designs.
That is because we understand so much more now about how the underlying failure mechanisms work, out of necessity in order to develop more advanced designs, and that understanding means we could also build older-style things more reliably.
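As a sketch of what that failover logic could look like: the spare unit watches a heartbeat line from the primary and switches power over when the line goes quiet. This is a hypothetical illustration, not a real design; read_heartbeat_pin() and switch_power_to_spare() are stub names I made up so the logic runs on any C compiler rather than real hardware calls:

    /* Hypothetical failover monitor: a spare unit watches the primary's
       heartbeat line and switches power (via an electronic fuse / relay)
       if the heartbeat goes quiet. The stubs below simulate the hardware;
       a real design would watch for edges rather than a level, since a
       line stuck high also means the primary is dead. */
    #include <stdbool.h>
    #include <stdio.h>

    #define HEARTBEAT_TIMEOUT_TICKS 5

    static int tick = 0;

    /* stub: pretend the primary stops toggling its heartbeat at tick 12 */
    static bool read_heartbeat_pin(void) { return tick < 12; }

    /* stub: in hardware this would trip the fuse / flip the relay */
    static void switch_power_to_spare(void) { puts("failover: spare powered on"); }

    int main(void) {
        int silent_ticks = 0;
        for (tick = 0; tick < 20; tick++) {
            if (read_heartbeat_pin()) {
                silent_ticks = 0;            /* primary alive: reset counter */
            } else if (++silent_ticks >= HEARTBEAT_TIMEOUT_TICKS) {
                switch_power_to_spare();     /* primary presumed dead */
                break;
            }
        }
        return 0;
    }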
Servers that have survived 10 to 15 years are quite common; 20 years, a lot less so.
Probably the easiest way would be to buy two "fairly good" identical servers, run one of them, and keep the other powered off for spare parts. By fairly good I mean a branded server (not an entry-level model) with dual power supplies and redundant disks.
Alternatively, you can wait 7 years and buy spare parts then. Unless things change dramatically, it will be easy at that point to find parts for a 7-year-old server, and cheap if you buy used.
Keep its software and configuration practices up to date, and upgrade and replace components. Servers that have lasted 10-15 years have rarely done so without at least some maintenance, and the less maintenance performed, the more likely the accumulated legacy of being out of date is to sink the system completely.