So I was wondering: how do you think software would look today if hardware had stopped improving, say, 10 or 20 years ago? Surely we would still have end-to-end encrypted messaging apps and social networks, but maybe video on demand (e.g. Netflix) would be more limited, and we would not have TikTok? Maybe we would not need 5G to connect our fridges to our shoes? We would not have cryptocurrencies or NFTs. But that does not mean we would have stopped developing software.
Or maybe said differently: what are "useless/cosmetic" improvements that happened in the last 10 or 20 years in software, and what are "real" ones?
But imagine the same happening between 1992 and 2002. Ten years there meant fundamental incompatibility. A 486 with 8 MB of RAM and a VGA graphics card was already a paperweight in 2002. No Windows XP and Warcraft 3 for you. A Core i7-3970X with a GeForce GTX 680 can still play the latest games at 1080p, including many DirectX 12 games, except for the ones that now require AVX2.
But to answer your question: if we had been stuck with 1992 technology, the internet would have evolved differently, and mainframes would play a much bigger role, to the point that your desktop computer would be just a thin client running the latest amazing software accelerated by mainframe computers. You would submit jobs from your computer, the mainframe would run them and get back to you.
On the other hand, Discord and Slack offer pretty much the same things that we already had with MSN Messenger all the way back in 2008: voice messages, image sharing, custom emotes, group chats, etc. Design trends have changed, but client-wise I'm pretty sure you could make a Discord clone that runs pretty much the same on 2008 hardware. So it's not like I don't agree with the point you make here.
Processing and streaming video is, as you say, costly, and better hardware has helped a lot in this regard. Nowadays it's even possible to stream video games. But this power that is now available to everyone is misused by most. Websites are bloated, carrying dozens of megabytes of crap that doesn't offer any tangible benefit to the end user. A lot of applications are slow and big. I think much of this comes from the culture fostered in the dev world, where it's more important to ship new features than good features. In reality, managers care more about having lots of features shipped than about a developer taking a few days on a single feature so it performs well. Developers capable of shipping both fast and well are rare, and this shouldn't be a surprise to anyone.
We should be spending much more time building performant tools that are easy to use, so that the heavy lifting is done where it needs to be. The idea of Electron is great, but the execution of it is bad; this is imo where the work should be done.
More market pressure to reinvest in the inner software stack would have taken place, a move which would actually have commoditized away the incumbent platform monopolies (Microsoft et al.) earlier - there would be less of a web focus and more of a systems/apps one.
There would still be an unhealthy adtech/surveillance sector, but it would consolidate under the thumb of high-capital players more readily. This would actually encourage more P2P-style solutions to appear in response. You would still have proof-of-work algorithms floating around searching for applications in this space; Bitcoin was always possible, since the computing effort just reflects what proportion of physical capital is being deployed to secure transactions.
The obvious main downside would be with the more extravagant uses of ML. Apps that depend on it as the core tech wouldn't pencil out. But some version of the "algorithm-mediated" experience would still be possible.
If we had been stuck in a CD-ROM, DVD-ROM, 56k world, I suppose we would have seen quite a bit of improvement in user interfaces, graphic design, and market integration of those tools. However, I think we will still, in many ways, see micro-renaissances in areas with a similar vibe. If you look at a map of cell phone coverage in America, it's basically 1994 in most places on the map, and those areas remain far underdeveloped relative to what a middle-class person could take advantage of in a post-COVID-19 America. In that sense, I think your overall question is more an invitation to ask what the potential of those areas is, rather than a "what if" alternate universe.
Yes we would - that doesn't need much hardware at all. Maybe 1998 hardware (a Pentium II type thing) at most? Bearing in mind that the security algorithms used would be scaled down anyway to suit the weaker hardware in this hypothetical scenario, and the hash rate would of course be lower.
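To make that concrete, here is a toy proof-of-work loop, a minimal sketch in Go (the header bytes and the difficulty value are made up for illustration, and this is the generic hash-below-target idea rather than Bitcoin's actual block format). On weaker hardware you would simply run a lower difficulty; nothing else changes.

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
        "math/bits"
    )

    // Toy proof-of-work: find a nonce whose SHA-256 hash starts with at least
    // `difficulty` leading zero bits. The scheme does not care how fast the
    // machine is; slower hardware just means you pick a smaller difficulty.
    func mine(header []byte, difficulty int) uint64 {
        buf := make([]byte, len(header)+8)
        copy(buf, header)
        for nonce := uint64(0); ; nonce++ {
            binary.LittleEndian.PutUint64(buf[len(header):], nonce)
            h := sha256.Sum256(buf)
            lead := 0 // count leading zero bits of the hash
            for _, b := range h {
                if b != 0 {
                    lead += bits.LeadingZeros8(b)
                    break
                }
                lead += 8
            }
            if lead >= difficulty {
                return nonce
            }
        }
    }

    func main() {
        // Difficulty 16 means ~65k hash attempts on average, which would be
        // quick work even for a Pentium II.
        fmt.Println("nonce:", mine([]byte("toy block header"), 16))
    }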
Twitter might be in the form of typing out an SMS to a certain number, and it would all be plaintext.
I think the main effect would be in developing countries. Currently, the poor buy an iPhone instead of a TV and a car, and the phone gives them job security and entertainment. They'd probably be working different kinds of jobs - demand for office work like clerks might be 8x or so what it is now, considering that you can't automate things like government processes.
If you had shown this to people in the mid-80s (even people who programmed professionally for these computers), they would've told you that you were crazy.
And there are many more examples showing that, given enough time and effort, a lot more can be achieved on the same hardware with better software, compilers, algorithms and undocumented behaviour.
Things would have been leaner, more to the point, and less demanding of the CPU.
There are some good things made possible by more powerful CPUs, for instance virtual machines, even though that concept is also an old one originating from IBM mainframes, as are relational databases and SQL.
Programming languages have also become better and now have more complete standard libraries.
I guess most contemporary user software we have today consists of slow, ugly and even irrelevant variants of better older versions.
Developer software fares somewhat better, as it does not need pretty pictures.
Rust would have a much harder time because its compilation is computationally very heavy and would take ages. Similarly C++ might be much slower to adopt new features because the compilers would just get too slow.
Dynamic languages like Python and Ruby may spend more effort on performance and JIT compilation rather than new features.
Or perhaps those hardware limitations would spark more innovation and breakthroughs.
I often imagine how cool it'd be to "operate the computer" if we were generally stuck with an 80x25 text mode, maybe with a separate "bitmapped" CRT for viewing graphical content.
I do enjoy the multimedia aspects of everyday computing immensely, and I'd miss my FLACs and H.265s...
But as for the software itself, excluding video games... I'd not mind it.
Most AI/high-powered stuff has not really added much value for the average end user, so they wouldn't have missed it. It would be a toy for geeks, which is kind of what it is now.
I routinely run and test my new data management system on old, outdated hardware to ensure that the speed is as acceptable as possible even when there are resource constraints. If a big database or file system query runs fast on old hardware, then when someone uses a high-end machine they will be truly amazed.
IMHO it boils down to a single issue: net time saved through automation. Consider an equation for task X and software Y, where Y is built to run on the hottest new machine coming out in the next 18-month cycle.
Told = Time task X will take without software Y
Tnew = Time task X will take with software Y + upgraded CPU
Tgross = gross time saved by installing Software Y + upgraded CPU (this is calculated as Told - Tnew)
Twaste = Time wasted through wonky installation, bugs, maintenance, build system problems, dependency hell of software Y + cpu upgrade
Tnet = Tgross - Twaste --> this is the only reason people use software. Does it, on the whole, save time?
So if task X used to take 1000 hours, but with software Y on an upgraded CPU it takes 500 hours while incurring 50 hours of waste, that's 500 gross hours saved but only 450 net hours saved.
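Just to spell out that arithmetic, here is a trivial sketch in Go using the numbers from the example above (nothing here beyond the definitions already given):

    package main

    import "fmt"

    func main() {
        // Numbers from the worked example above.
        told := 1000.0 // hours for task X without software Y
        tnew := 500.0  // hours for task X with software Y + upgraded CPU
        twaste := 50.0 // hours lost to installation, bugs, maintenance, etc.

        tgross := told - tnew   // gross time saved
        tnet := tgross - twaste // net time saved, the only number users care about

        fmt.Printf("Tgross = %.0f h, Tnet = %.0f h\n", tgross, tnet)
        // Output: Tgross = 500 h, Tnet = 450 h
    }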
https://smoothspan.files.wordpress.com/2007/09/clockspeeds.j...
When CPUs were speeding up by 2x every few years, this equation was dominated by Tgross. Twaste was very small in comparison, so users would tolerate a lot of Twaste. Twaste was basically ignored for 20+ years.
Nowadays, CPUs are not speeding up much at all, and Tgross is struggling to be anything significant. In fact, sometimes Tgross is -lower- than Twaste, so overall it doesn't even make sense to upgrade the software and CPU. So now the only thing in that equation we can still improve is Twaste.
That means nowadays we want software that is easy to install, does not have dependency problems, does not have build problems, does not have memory bugs or security flaws, and doesn't require tons of maintenance.
Go and Rust both improve on all those aspects versus older languages.
"what about parallelism" -> as you may know, slapping N CPUs on a board does not boost speed by N times. And there is an enormous amount of Twaste in the debugging since complexity of thread interaction has gone up by N something. With single thread CPU, Twaste remains constant. With a N-Cpu system, Twaste goes up in some proportion to N. So to get any savings you still need to attack Twaste, and that is done by intrinsic parallelism features built into the language, which both Rust and Go have to various extents.
Caveat: I have no idea if this is right, I just made it up.
Edit - this model also explains the thin-client trend we are seeing, which Comevus described below. The web is basically the thin-client theory finally succeeding after 40 years of PC domination. Web = lower Twaste: almost no dependencies, no installation problems, no maintenance on the user's side.
This also explains containers. Containers try to attack Twaste for systems built on languages made for a Tgross world, like Python, C and JavaScript.
Another edit -> does this mean the PC is dead forever? No. I forgot one major component of Twaste, which is dealing with the bureaucratic waste of people trying to control what other people are doing on a computer. This form of Twaste dominated a lot of the old pre-PC days of computing: monopolization, discrimination, racism, sexism, favoritism, nepotism, corruption, and all the other waste that humans insert into a process because, contrary to some popular opinions, a lot of human beings make their living off Twaste, not off Tgross.
So at some point the bureaucratic nonsense makes Twaste so large that even a small Tgross is better. So the PC is not dead, by this theory. Never fully dead, anyway.
The end