HACKER Q&A
📣 palata

How would software look if hardware had stopped improving long ago?


Progress in hardware allows software to build computationally costly abstractions to make devs more productive (for better or worse). While some use cases definitely required improvements (e.g. deep learning became a thing thanks to powerful hardware), it is not clear that all software is better now that "RAM/CPU is cheap". For instance, IRC predates the World Wide Web, but desktop apps like Slack/Discord (which get shipped with a whole browser via ElectronJS) solve the exact same problem. Many websites are excessively heavy "because they can", I guess mostly to "look better", and maybe to some extent because of ads/trackers. Modern developer tools require downloading half the Internet regularly (e.g. Node projects often come with 30,000 dependencies, apps get shipped with a whole rootfs in containers, or with an embedded Chromium browser, etc.). There is a ton of overhead everywhere, sometimes just because we can, sometimes because it's faster to market.

So I was wondering: how do you think software would look today if hardware had stopped improving, say 10 or 20 years ago? Surely we would have end-to-end messaging apps and social networks, but maybe video-on-demand (e.g. Netflix) would be more limited, and we would not have TikTok? Maybe we would not need 5G to connect our fridges to our shoes? We would not have cryptocurrencies or NFTs. But that does not mean we would have stopped developing software.

Or maybe said differently: what are "useless/cosmetic" improvements that happened in the last 10 or 20 years in software, and what are "real" ones?


  👤 Comevius Accepted Answer ✓
Hardware improvement has slowed down considerably in the last 10 years. A Core i7-3970X from 2012 is still comparable to a typical notebook processor, or to the fastest smartphone chips today, including the one in the Meta Quest Pro that was just announced.

But imagine the same happening between 1992 and 2002. Ten years there meant fundamental incompatibility. A 486 computer with 8 MB of RAM and a VGA graphics card was already a paperweight in 2002. No Windows XP and Warcraft 3 for you. A Core i7-3970X with a GeForce GTX 680 can still play the latest games at 1080p, including many DirectX 12 games, except for the ones that now require AVX2.

But to answer your question: if we had been stuck with 1992 technology, the internet would have evolved differently, and mainframes would play a much bigger role, to the point that your desktop computer would be just a thin client running the latest amazing software accelerated by mainframe computers. You would submit jobs from your computer, the mainframe would run them and get back to you.


👤 jerojero
Just want to say that Discord and Slack do not solve the exact same problem as IRC. And herein lies one of the most difficult aspects of protocols versus programs: protocols are much slower to adapt. Discord allows for a ton of functionality that is not possible natively through IRC, and it is a mistake to dismiss it.

On the other hand, Discord and Slack offer pretty much the same things we already had with MSN Messenger all the way back in 2008: voice messages, image sharing, custom emotes, group chats, etc. Design sensibilities have changed, but client-wise I'm pretty sure you could build a Discord clone that runs about as well on 2008 hardware. So it's not like I don't agree with the point you make here.

Processing and streaming video is, as you say, costly, and better hardware has helped a lot in this regard. Nowadays it's possible to stream video games. But this power that is now available to everyone is misused by most. Websites are bloated: dozens of megabytes of crap that doesn't really offer any tangible benefit to the end user. A lot of applications are slow and big. I think much of this problem comes from the culture fostered among devs, where it's more important to ship new features than good features. In reality, managers care more about having lots of features shipped than about a developer taking a few days on just one feature so that it performs well. Developers capable of shipping both fast and well are rare, and this shouldn't be a surprise to anyone.

We should be spending much more time building performant tools that are easy to use, so that the heavy lifting is done where it needs to be. The idea of Electron is great, but the execution of it is bad; this is imo where the work should be done.


👤 syntheweave
Almost nothing would be different at a glance. All the major trends that took place, with very few exceptions (VR, for example, still pushes hardware; ML-based AI likewise), were already possible with mid-2000s hardware, just at lower fidelity, with less generous unit economics on the backend, and with less power efficiency on portable devices. This may have pushed attempts to run such services back by a few years, but I can't imagine that Twitter, for example, would be substantially different. I fully believe we would still have ended up with smartphones, even if they had to run a much more stripped-down OS. You would still get attempts to make smart gadgets and freezer-fridge displays, because these gimmicks have been part of the economic backdrop ever since we started selling disposable razors and ballpoint pens.

There would have been more market pressure to reinvest in the inner software stack, a move which would actually have commoditized away incumbent platform monopolies earlier (Microsoft et al.) - there would be less of a web focus and more of a systems/apps one.

There would still be an unhealthy adtech/surveillance sector, but it would consolidate under the thumb of high-capital players more readily. This would actually encourage more P2P-style solutions to appear in response. You would still have proof-of-work algorithms floating around searching for applications in this space; Bitcoin was always possible, the computing effort just reflects what proportion of physical capital is being deployed to secure transactions.

The obvious main downside would be with more extravagant uses of ML. Apps that depend on it as the core tech wouldn't pencil out. But some version of "algorithm-mediated" would still be possible.


👤 zetamax
I don't think it would look that much different; in many ways there would just be fewer people using it, and those who have it would be getting less out of it. If prices had remained flat, even less productivity would have resulted. Consider how little a microwave oven, washing machine, or refrigerator has really changed, and how (relatively speaking) society questions that lack of change.

If we had been stuck in a CD-ROM, DVD-ROM, 56k world, I suppose we would have seen quite a bit of improvement in user interface, graphic design, and market integration of those tools. However, I think we will still in many ways see micro-renaissances in areas with a similar vibe. If you look at a map of cell phone coverage in America, it's basically 1994 in most places on the map, and those areas go far underdeveloped relative to what a middle-class person could take advantage of in post-COVID-19 America. In that sense, I think your overall question is more an invitation to consider the potential of those areas than a "what if" alternative universe.


👤 david927
I think you've said it all here. No substantial improvements and, if anything, it has set us back by allowing for even more gluttonous behavior.

👤 quickthrower2
> We would not have cryptocurrencies or NFTs

Yes we would - that doesn't need much hardware at all. Maybe 1998 hardware (Pentium II type thing) at most? Bear in mind that the security algorithms used would be scaled down anyway to suit the weaker hardware in this hypothetical scenario, and the hash rate would of course be lower (a toy sketch below illustrates the idea).
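
For illustration, here's a toy proof-of-work sketch in Python (my own toy example, not any real cryptocurrency's protocol); the point is that the expensive part is just a difficulty knob, which would simply be set lower on slower hardware:

    # Toy proof-of-work: find a nonce so that SHA-256(data + nonce) has
    # `difficulty_bits` leading zero bits. Hypothetical illustration only.
    import hashlib

    def mine(block_data: bytes, difficulty_bits: int) -> int:
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # found a hash below the target
            nonce += 1

    # On a Pentium II you'd pick a smaller difficulty to keep block times sane;
    # the scheme itself needs nothing more than a hash function.
    print(mine(b"example block", difficulty_bits=16))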


👤 muzani
I think the apps of today would be gone. We'd probably have peaked with Palm OS and similar. There would probably still be some app and game devs for mobile devices. Someone would find a way to make D&D or RollerCoaster Tycoon for a Palm Pilot. We'd be playing MUDs on our phones instead of MMOs; not necessarily a bad thing.

Twitter might be in the form of typing out an SMS to a certain number, and it would all be plaintext.

I think the main effect would be in developing countries. Currently, the poor buy an iPhone instead of a TV and a car, and the phone gives them job security and entertainment. They'd probably be working different kinds of jobs - demand for office work like clerks might be 8x or so, considering that you can't automate things like government processes.


👤 tluyben2
You can see from the demo and retro-computing scenes how far things can go if software devs keep optimizing every cycle and keep finding new tricks. One example is SymbOS [0], a cooperative multitasking OS for mid-80s computers with 3 MHz CPUs.

If you had shown this in the mid 80s, people (even people who programmed these computers professionally) would've told you that you were crazy.

And there are many more examples showing that, given enough time and effort, a lot more can be achieved with the same hardware through better software, compilers, algorithms, and undocumented behaviour.

[0] http://www.symbos.de/


👤 Yaa101
We would have most of what we have now, as most of today's concepts were programmed long ago.

Things would have been leaner, more to the point, and easier on the CPU.

Some good things are better possible thanks to more powerful CPUs, for instance virtual machines, even though that concept is also an old one originating from IBM mainframes, as are relational databases and SQL.

Also, programming languages have become better and have better, more complete stdlibs.

I guess most contemporary user software consists of slow, ugly, and even irrelevant variants of better older versions.

Developer software does somewhat better as it does not need pretty pictures.


👤 dajonker
Programming languages would be affected greatly. A language like Go would be more popular because of the performance of both the compiler and the code it produces.

Rust would have a much harder time because its compilation is computationally very heavy and would take ages. Similarly, C++ might be much slower to adopt new features because the compilers would just get too slow.

Dynamic languages like Python and Ruby may spend more effort on performance and JIT compilation rather than new features.

Or perhaps those hardware limitations would spark more innovation and breakthroughs.


👤 dusted
Cooler and more "cyberpunk".

I often imagine how cool it'd be to "operate the computer" if we were generally stuck with 80x25 text mode, maybe with a separate "bitmapped" CRT for viewing graphical content.

I do enjoy the multimedia aspects of everyday computing immensely, and I'd miss my FLACs and H.265s...

But as for the software itself, excluding video games... I'd not mind it.


👤 buriburi
I think the software would be better than it is now, because developers would have had more time to study the specifications of the hardware. Hardware now changes too fast, with new drivers, new firmware, etc. That forces developers to search for a quick fix to a problem. But is that solution the best or most accurate one? Well, we never know, because another hardware change comes along that forces us to change the last fix... or worse, to totally replace a software function with another quick fix. I have the old Commodore 64 in mind, for example... You have a good game like Bruce Lee (a platform adventure), and Antiriad could be seen as its evolution... same CPU, RAM, etc., but developers learned how to make the most of it. Or maybe I'm too old for this crap. XD

👤 conductr
I think it looks the same and just becomes less profitable. Design trends propagate slower because iteration is slower.

Most AI/high-powered stuff has not really added much value for the average end user, so they wouldn't have missed it. It would be a toy for geeks, which is kind of what it is now.


👤 smakmur
Not an expert, but video games do look substantially different towards the end of a console's lifespan.

👤 didgetmaster
One of the reasons why software is so bloated is because the developers are constantly using it on their high-end machines. When a customer complains that something is slow on their 7-year-old laptop, the developer often just shrugs and says 'works fine for me'.

I routinely run and test my new data management system on old, outdated hardware to ensure that the speed is as acceptable as possible even when there are resource constraints. If a big database or file-system query runs fast on old hardware, then when someone uses a high-end machine they will be truly amazed.


👤 THENATHE
As an aside, craigslist is the pinnacle of web design in my eyes, and this concept made me think of how I kinda wish the world would go back to wildly functional yet kinda ugly websites over the modern bloated but aesthetic websites we have now.

👤 everythingabili
It would remind me of the work of Hundred Rabbits.

https://100r.co/site/philosophy.html


👤 fatneckbeardz
My theory is everything would look a lot more like Rust and Go.

IMHO it boils down to a single issue: net time saved through automation. Consider an equation for task X with software Y that is built to run on the hottest new machine coming out in the next 18-month cycle.

Told = time task X will take without software Y

Tnew = time task X will take with software Y + upgraded CPU

Tgross = gross time saved by installing software Y + upgraded CPU (i.e. Told - Tnew)

Twaste = time wasted through wonky installation, bugs, maintenance, build-system problems, and dependency hell of software Y + CPU upgrade

Tnet = Tgross - Twaste --> this is the only reason people use software. Does it, on the whole, save time?

So if a task X used to take 1000 hours, but with software Y on an upgraded CPU it takes 500 hours and requires 50 hours of waste, that's 500 gross hours saved but only 450 net hours saved (see the small sketch below).
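
A minimal Python sketch of this back-of-the-envelope model, using the numbers from the example above plus a made-up stagnant-CPU scenario:

    # Tnet model from above: gross speedup minus the waste of adopting it.
    def net_time_saved(t_old: float, t_new: float, t_waste: float) -> tuple:
        t_gross = t_old - t_new       # raw hours saved by software Y + new CPU
        t_net = t_gross - t_waste     # what's left after install/maintenance pain
        return t_gross, t_net

    # Fast-CPU era (numbers from the example): big speedups dwarf the waste.
    print(net_time_saved(1000, 500, 50))   # (500, 450) -> clearly worth it

    # Stagnant-CPU era (made-up numbers): the speedup shrinks, waste dominates.
    print(net_time_saved(1000, 950, 80))   # (50, -30) -> costs more than it saves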

https://smoothspan.files.wordpress.com/2007/09/clockspeeds.j...

When CPUs were speeding up by 2x every few years, this equation was dominated by Tgross. Twaste was very small in comparison, so users would tolerate a lot of Twaste. Twaste was basically ignored for 20+ years.

Nowadays, CPUs are not speeding up much at all. Tgross is struggling to be anything significant. In fact, sometimes Tgross is -lower- than Twaste, so overall it doesn't even make sense to upgrade the software and CPU. So now the only thing in that equation we can still improve upon is Twaste.

That means that nowadays we want software that is easy to install, does not have dependency problems, does not have build problems, does not have memory bugs or security flaws, and doesn't require tons of maintenance.

Go and Rust both improve on all those aspects versus older languages.

"what about parallelism" -> as you may know, slapping N CPUs on a board does not boost speed by N times. And there is an enormous amount of Twaste in the debugging since complexity of thread interaction has gone up by N something. With single thread CPU, Twaste remains constant. With a N-Cpu system, Twaste goes up in some proportion to N. So to get any savings you still need to attack Twaste, and that is done by intrinsic parallelism features built into the language, which both Rust and Go have to various extents.

Caveat: I have no idea if this is right, I just made it up.

Edit - this model also explains the thin client theory we are seeing, which Comevius described above. The web is basically the thin client theory finally succeeding after 40 years of PC domination. Web = lower Twaste - almost no dependencies, no installation problems, no maintenance on the user's side.

This also explains containers. Containers try to attack Twaste for systems built on languages made for a Tgross world, like Python, C, and JavaScript.

Another edit -> does this mean the PC is dead forever? No. I forgot one major component of Twaste, which is dealing with the bureaucratic waste of people trying to control what other people are doing on a computer. This form of Twaste dominated a lot of the old pre-PC days of computing: monopolization, discrimination, racism, sexism, favoritism, nepotism, corruption, and all the other waste that humans insert into a process, because, contrary to some popular opinions, a lot of human beings make their living off Twaste, not off Tgross.

So at some point the bureaucratic nonsense makes Twaste so large that even a small Tgross is better. So the PC is not dead by this theory. Never fully dead, anyway.

The end