I think I agree with the issue, but I also believe that things like vendors getting away with only a few years of software support, or proprietary OEM drivers that eventually go EOL and give hardware an expiration date, aren't exactly consequences of less efficient software.
So this kinda brings me to my question: is software actually becoming less efficient?
We have better image/video/audio codecs, better multi-core programming languages, better efficiency in various high-level languages, (sometimes) better-optimized libraries, better tooling that allows development of more efficient software, and pre-trained ML models that use much less storage/compute than some custom-crafted software would have.
I rarely see people using new tools to develop code that is itself more efficient. Judging by the ever-growing resource needs of programs that rarely show corresponding growth in capabilities, I think almost all of the effort from the developer community has been on making developers feel more efficient, at the cost of compute resources.
Personally, I'd love to see people focus on efficiency even if it takes more time and effort. Unfortunately, the incentive just isn't there: developers focus on whatever lets them grind out new code faster for the largest audience possible, hence the layers upon layers of runtimes and abstractions. If that means burning CPU and memory, they really don't seem to care, hence the relentless consumption of more and more resources by software.
I'm jaded: performance analysis was a long-term research area for me, so I pay a bit more attention to these kinds of issues than your average JS or Python jockey who thinks the computer is a magical container full of infinite RAM and compute resources that are theirs and theirs alone to consume.
Software efficiency seems the same: people have some fixed tolerance of time and bugginess for a task. When hardware gets faster and more stable, people just tolerate more bloat and bugs. Companies only fix this when complaints get bad enough, so the incentive to do better just disappears.
This sort of fixed ratio while the tech below changes is everywhere. My original PC had a 200 MB hard disk, and Win 3.1 took 20 MB, so 10%. My work laptop has a 512 GB SSD, and Win10 takes about 50 GB, also 10%. Win 3.1 was heavily optimized to fit in RAM and on disk; Win10 is a bloated slug. It will stay a bloated slug until some shortage or other forces Microsoft to clean up its mess. This has already happened, e.g. with the Eee PC, which couldn't run Win Vista, so they kept XP alive and optimized Win7 some more.
Enterprise has the same limits. A company had a mainframe with 1 MB of RAM, and the end-of-year batches ran in about 3 days because they had to. Five decades later, the COBOL code has more than 10,000 times as much RAM, and end-of-year closing still takes 2 or 3 days. If it were slower, they'd optimize, because otherwise they couldn't fulfill their obligations; but faster makes no sense, since they're closed for those 3 days at year end anyway. So optimization happens when requirements force it, and bloat happens whenever it gets a chance to grow unchecked.
All of this is also why we shouldn't worry too much about government forcing companies to become more eco- or repair-friendly. The company will scream that it's impossible, and when forced it will remove some bloat and carry on.
I will assume that when you say "efficiency" you mean "clock cycles, and RAM required, to perform a process".
So let's start by saying that probably all code could be improved to go faster and/or use less ram. Given enough time most things can be "improved".
But there's a price to be paid. Most code starts out "easy to read" but least performant. Performance is improved, usually at the cost of readability, until it's "fast enough".
Readability impacts future maintainability; code that's hard to read may contain bugs (especially for edge cases) and may introduce security flaws.
I've worked on libraries, making them highly performant, but the code inevitably becomes more opaque. For a library the trade-off is worth it.
For that sales report that runs once a month and takes 10 minutes, but could likely be optimized to run in 2, the trade-off is less obvious: keeping the report easy to maintain is a valuable long-term benefit.
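To make that readability cost concrete, here's a contrived Python sketch (a made-up moving-average task of my own, not the report above): both versions return the same result, but the optimized one hides its intent behind a prefix-sum trick.

```python
# Version 1: easy to read -- recompute each window from scratch, O(n * w).
def moving_avg_readable(xs, w):
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

# Version 2: prefix sums, O(n) -- faster, but the intent is no longer
# obvious at a glance, and off-by-one mistakes are easy to introduce
# when someone edits this two years from now.
def moving_avg_fast(xs, w):
    prefix = [0.0]
    for x in xs:
        prefix.append(prefix[-1] + x)
    return [(prefix[i + w] - prefix[i]) / w for i in range(len(xs) - w + 1)]

assert moving_avg_readable([1, 2, 3, 4], 2) == moving_avg_fast([1, 2, 3, 4], 2)
```

Version 2 is the one a future maintainer quietly breaks; whether the speedup is worth that risk depends entirely on how often the code runs.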
Computer clock cycles aren't the only ones worth measuring. Developers also have limited time, and time spent doing one thing is time not spent elsewhere. Sure, I can spend my time making something happen in 1 tenth of a second instead of 2, but if the difference isn't humanly perceptible, what's the point?
Incidentally, this question is usually paired with "is software more bloated?", and it is, primarily because there are more users who want to do more things. Hard drive space is cheap; making programs "small" is very expensive.
So yeah, compromises. In time, space and money.
I'd argue that software is becoming worse in some ways. It seems the bar for acceptable quality is _very_ low these days, with most people just shrugging issues off.
Personally, I find this talk by Jonathan Blow to be rather inspiring:
https://www.youtube.com/watch?v=pW-SOdj4Kkk
As for software becoming less efficient specifically, I'd say it is. For example, relying on microservices for everything is probably not a good idea for most companies; not everyone is AWS, nor do they need to be.
My work computer regularly spends hours heating my room, running god knows what scanning and inventory software when I'm not using it for anything. Unless, of course, one of the desktop people has managed to deploy coin-mining software across the fleet; at least then someone is getting a benefit beyond room heating.
Software developers get lazier, stop profiling, and aren't incentivized to produce efficient code; we're usually incentivized to produce code that works, and we stop at that.
There are two problems: computers today have such a staggering amount of resources that being mindful of performance is no longer baked into the programming mindset, and it simply isn't profitable to spend an extra month taking your program from "usable" to "performant".
More programmers need to try building something on a tiny AVR with 4 kbit of RAM. It's more fun than you'd think.
But because the code actually works, the library gets shared with other programmers. Eventually someone uses it for something that has a lot of data and runs frequently, and now the inefficient code becomes a real problem. Multiply that by a dozen libraries with similar characteristics and you begin to understand the issue.
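A minimal sketch of how that plays out, in Python; `dedup_naive` is a hypothetical library helper I made up for illustration, not anything from this thread. It passes every small test, then falls over once someone feeds it real data.

```python
import time

# Hypothetical library helper: looks fine in a small unit test, but `seen`
# is a list, so every membership check is O(n) and the function is O(n^2).
def dedup_naive(items):
    seen = []
    for item in items:
        if item not in seen:
            seen.append(item)
    return seen

# Same behavior, O(n) overall, using a set for the membership test.
def dedup_fast(items):
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

data = list(range(5_000)) * 2          # 10,000 items, 5,000 unique
t0 = time.perf_counter(); dedup_naive(data); t1 = time.perf_counter()
dedup_fast(data); t2 = time.perf_counter()
print(f"naive: {t1 - t0:.3f}s, fast: {t2 - t1:.3f}s")
# On typical hardware the naive version is orders of magnitude slower,
# and the gap widens quadratically as the data grows.
```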
When I started programming, the biggest problem was the 8K of RAM I had available. This year one of my programs crashed after exhausting 500GB.
The programs do more: they crunch more data, paint more pixels, and animate the display in more whimsical ways. But there’s little to force coders to be more efficient, so they spend their time on other things.
We are finally optimizing again.
Unfortunately some stuff still lags behind, but it's not like the industry forgot how to write fast software.
1. In terms of software efficiency, engineers may lament the perceived waste, inefficiency, and imperfection in the produced code, but from a business standpoint it is a rational cost/benefit decision. It is useful to view software through the lens of economics. In economics there is a concept that Labor (e.g. a software engineer) and Capital (e.g. servers, infrastructure) are substitutable. Many sub-optimal programs and systems built with reduced labor cost are perfectly usable by substituting more hardware. Optimization only makes sense where there is a clear benefit that exceeds the cost.
As a contrived or extreme example: would a manager spend $200k in labor to produce a highly optimized program, hand-crafted in assembly, or spend $500 in labor to produce a program in a higher-level language such as Java that does the same thing but uses more compute resources? The spread between those two choices lets you throw a lot of hardware at the sub-optimal program, so it is frequently the better business decision to produce inefficient software and scale the hardware. It may make the engineer feel bad, but what they wish to optimize is not aligned with what the business wishes or needs to optimize.
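To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The labor figures come from the contrived example above; the $2,000 server-year cost is my own made-up, illustrative assumption.

```python
# Back-of-the-envelope version of the trade-off above.
optimized_labor  = 200_000   # hand-tuned assembly implementation (contrived)
quick_labor      = 500       # higher-level implementation (contrived)
server_cost_year = 2_000     # assumed cost of one extra server-year

hardware_budget = optimized_labor - quick_labor
extra_server_years = hardware_budget / server_cost_year
print(f"The labor saved buys ~{extra_server_years:.0f} extra server-years")
# -> The labor saved buys ~100 extra server-years
```

Under these assumptions, unless the inefficient program's extra hardware footprint exceeds roughly 100 server-years over its lifetime, the "wasteful" choice wins on cost.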
2. In terms of the short 'shelf life' of software, the same problem infects hardware, consumer electronics, and other products. I've purchased a number of iPads for my family over the years; after a few years and iOS versions, more and more apps stop being compatible, until the device becomes effectively useless even though the hardware is the same as when I bought it.
Again, let's view this through the lens of economics. A cynic will look at the iPad situation and think, 'What better way to separate me from my money than to force me to buy a new product every few years, solely by software shenanigans?' Of course businesses enjoy selling more product, but they also have cost constraints in order to stay viable (ignoring those who are perhaps making 'obscene profits' before competitors take notice, as my econ professor used to shout so passionately).
We might consider as an alternative that it is simply too expensive to maintain many versions of an app, on multiple platforms, with backwards compatibility and security concerns. The business instead is making a rational decision to only support their application on the OS versions and platforms that the majority of their customers are running at any point in time, similar to how a web developer at some point has to stop bothering to ensure their site works in IE 5.0.
None of that reduces my frustration at planned obsolescence, but maybe this is just the reality of things.
3. I'm on the fence about the labor-exploitation part: this seems like a different and very complex issue. Some may argue that more hardware manufacturing provides good jobs without extended education or training requirements, while others may argue that those jobs are exploitative because the working conditions are poorly regulated or the positions don't pay enough by their standards.
At a macro level, global poverty levels have significantly decreased over the past 30 years [1], so humanity seems to be doing something right. An optimist may say that as regions of the world move out of poverty, the regulatory environment will inevitably follow to reduce abuse, pollution, and safety risks. Time will tell but it requires patience - human systems are slow to change, in contrast to software and hardware.
[1] https://blogs.worldbank.org/opendata/april-2022-global-pover...