HACKER Q&A
📣 miduil

Is software becoming less efficient?


I had a discussion with a friend where they stated that inefficient software is bad because it ultimately leads to human labor exploitation in order to fulfill our demand for ever-faster compute power.

I think I agree with the concern, but I also believe other factors matter: vendors get away with only a few years of software support, and proprietary OEM drivers eventually go EOL and give hardware an expiration date - things that aren't exactly caused by software becoming less efficient.

So this kinda brings me to my question: is software actually becoming less efficient?

We have better image/video/audio codecs, better multi-core programming languages, better efficiency in various high-level programming languages, (sometimes) better optimized libraries, better tooling that allows development of more efficient software, and also pre-trained ML technology that uses much less storage/compute than some custom-crafted software would have allowed.


  👤 thrwawy1234 Accepted Answer ✓
Yes and no. As you said, in some areas we have wonderfully efficient codes that can take advantage of hardware for things like codecs. Efficiency through high-level languages - that's debatable. Often they trade the computer's efficiency for developer efficiency - layers of abstraction to hide platform details, runtime layers that defer work to execution time, etc.
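
To make that trade concrete, here's a toy sketch in Python (my illustration, not a rigorous benchmark - exact timings vary by machine and interpreter, but the ordering is usually stable), pushing the same arithmetic through progressively thicker layers:

    import timeit

    data = list(range(1_000_000))

    def plain_loop():
        total = 0
        for x in data:
            total += x
        return total

    def builtin_sum():
        return sum(data)  # one tight C-level loop under the hood

    def generator_pipeline():
        # extra generator and lambda layers: "nicer" to compose, more machinery to run
        return sum(map(lambda x: x, (x for x in data)))

    for fn in (plain_loop, builtin_sum, generator_pipeline):
        seconds = timeit.timeit(fn, number=10)
        print(f"{fn.__name__:20s} {seconds:.3f}s")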

I rarely see people using new tools to develop code that is itself more efficient. Based on the ever-growing resource needs of programs that rarely show corresponding growth in capabilities, I think almost all of the effort from the developer community has been on making developers feel more efficient at the cost of compute resources.

Personally I'd love to see people focus on efficiency even if it takes more time and effort. Unfortunately, the incentive just isn't there - developers focus on what makes it possible for them to grind out new code faster to the largest audience possible. Hence the layers upon layers of runtimes and abstractions that make that possible. If it means burning CPU and memory, they really don't seem to care. Hence the relentless consumption of more and more resources by software.

I'm jaded: performance analysis was a long-term research area for me, so I pay a bit more attention to these kinds of issues than your average JS or Python jockey who thinks the computer is a magical container full of infinite RAM and compute resources that are theirs and theirs alone to consume.


👤 hyperman1
I've read somewhere: when people commute to work, they have a fixed time budget, not a fixed distance budget. If they get access to faster transport, they just find jobs further from home.

Software efficiency seems the same: people have some fixed tolerance of time and bugginess for a task. When hardware gets faster and more stable, people just tolerate more bloat and bugs. Companies only fix this when complaints get bad enough, so the incentive to do better just disappears.

This sort of fixed ratio while the tech below it changes is everywhere. My original PC had a 200 MB hard disk, and Win 3.1 took 20 MB, so 10%. My work laptop has a 512 GB SSD, and Win10 takes about 50 GB, also 10%. Win 3.1 was heavily optimized to fit in RAM and on disk; Win10 is a bloated slug. It will stay a bloated slug until some shortage or other forces Microsoft to clean up its mess. This has already happened, e.g. with the Eee PC that couldn't run Windows Vista, so Microsoft kept XP alive and optimized Win7 some more.

Enterprise has the same limits. A company had a mainframe with 1 MB of RAM. End-of-year batches ran in about 3 days because they had to. Today, five decades later, the COBOL code has more than 10,000 times as much RAM, and end-of-year closure still takes 2 or 3 days. If it were slower, they'd optimize or they couldn't fulfill their obligations. But faster makes no sense; they're closed at the end of the year for those 3 days anyway. So optimization happens when requirements force it, and bloat happens when it gets a chance to grow unchecked.

All of this is also why we shouldn't worry too much about governments forcing companies to become more eco-friendly or repair-friendly. The company will scream that it's impossible, and when forced it will remove some bloat and carry on.


👤 bruce511
In programming, as in life, we balance requirements, optimize for some things, and compromise on others.

I will assume that by "efficiency" you mean "clock cycles and RAM required to perform a process".

So let's start by saying that probably all code could be improved to go faster and/or use less RAM. Given enough time, most things can be "improved".

But there's a price to be paid. Most code starts out "easy to read" but least performant. Performance is improved, usually at the cost of readability, until it's "fast enough".

Readability impacts future maintainability; code that's hard to read may harbor bugs (especially around edge cases) and may introduce security flaws.
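
A minimal Python sketch of that progression (my own illustrative example - any language shows the same shape):

    def sum_first_n_readable(n):
        # Reads like the spec: "add up the first n integers".
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_first_n_fast(n):
        # Constant time via Gauss's identity. Faster, but a maintainer now has
        # to recognize the formula before they can safely change the behavior.
        return n * (n + 1) // 2

    assert sum_first_n_readable(1000) == sum_first_n_fast(1000) == 500500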

I've worked on libraries, making them highly performant, but the code inevitably becomes more opaque. For a library the trade-off is worth it.

For the sales report that runs once a month, takes 10 minutes, and could likely be optimised to run in 2, the trade-off is less obvious. Keeping the report easy to maintain is a valuable long-term benefit.

Computer clock cycles are not the only ones worth measuring. Developers also have limited time, and time spent doing one thing is time not spent elsewhere. Sure, I can spend my time making something happen in 1 tenth of a second instead of 2, but if the difference is not humanly perceptible, what's the point?

Incidentally this question is usually paired with "is software more bloated", and it is, primarily because there are more users who want to do more things. Hard drive space is cheap. Making programs "small" is very expensive.

So yeah, compromises. In time, space and money.


👤 PartiallyTyped
> software actually becoming less efficient?

I'd argue that software is becoming worse in some ways. It seems that the bar for acceptable quality is _very_ low these days, with most people just shrugging issues off.

Personally, I find this talk by Jonathan Blow to be rather inspiring:

https://www.youtube.com/watch?v=pW-SOdj4Kkk

As for software becoming less efficient specifically, I'd say it is. For example, relying on microservices for everything is probably not a good idea for most companies. Not everyone is AWS, nor do they need to be.


👤 dragonsky
What I’d like to know is how much energy is burnt by corporate mandated software running on workstations.

My work computer regularly spends hours heating my room, running god knows what scanning and inventory software, when I'm not using it for anything. Unless of course one of the desktop people has managed to deploy coin-mining software across the fleet. At least then someone is getting a benefit beyond room heating.


👤 sacnoradhq
Gas expands to fill the volume of the container -> Tragedy of the Commons.

Software developers get lazier, stop profiling, and aren't incentivized to produce efficient code... we're usually incentivized to produce code that works, and we stop there.


👤 rolenthedeep
The tools we have are generally quite efficient. When someone puts thought and care into building software, it can be efficient.

There are two problems: computers today have such a staggering amount of resources that being mindful of performance is no longer baked into the programming mindset, and it simply isn't profitable to spend an extra month taking your program from "usable" to "performant".

More programmers need to try building something on a tiny AVR with 4kbit of RAM. It's more fun than you'd think.


👤 didgetmaster
Software often gets used for something besides its intended purpose. Someone writes a simple library to solve a simple problem. Efficiency is not an issue since the problem is simple and the library is rarely used. The code is slow but it works so no one cares.

But because the code actually works, the library gets shared with other programmers. Eventually someone uses it for something that has a lot of data and is run frequently. Now the inefficient code becomes a real problem. Multiply that by a dozen libraries with similar characteristics and you begin to understand the issue.
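
A hypothetical Python sketch of that lifecycle (names and scales invented for illustration):

    def dedupe(items):
        # The original "simple library": fine for the dozen items it was written for.
        result = []
        for x in items:
            if x not in result:      # list membership is O(n), so the loop is O(n^2)
                result.append(x)
        return result

    def dedupe_at_scale(items):
        # What's needed once someone feeds it a million records.
        seen = set()
        result = []
        for x in items:
            if x not in seen:        # set membership is O(1) on average
                seen.add(x)
                result.append(x)
        return result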


👤 bboreham
“Software is getting slower more rapidly than hardware is becoming faster” is Wirth’s Law, coined 25 years ago.

When I started programming, the biggest problem was the 8K of RAM I had available. This year one of my programs crashed after exhausting 500GB.

The programs do more: they crunch more data, paint more pixels, and animate the display in more whimsical ways. But there’s little to force coders to be more efficient, so they spend their time on other things.


👤 devdude1337
A large amount of modern software is bloated. Most websites are filled with dozens of tracking and marketing (anti-)features, DRM in games, ORMs and query languages that require another parsing and resolving layer in backends, not to mention network latency, and so on… Software today does much more than in the past, yet most of what it does is worthless to the user.
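
On the ORM point, a hedged sketch of the classic "N+1 query" shape that naive lazy loading tends to produce (schema and data invented for the example; real ORMs can batch, but the default access pattern often looks like this):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
        INSERT INTO author VALUES (1, 'Ann'), (2, 'Bo');
        INSERT INTO book VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
    """)

    # ORM-ish lazy loading: one query per parent row -> "N+1" round trips.
    for author_id, name in con.execute("SELECT id, name FROM author").fetchall():
        titles = con.execute("SELECT title FROM book WHERE author_id = ?",
                             (author_id,)).fetchall()
        print(name, [t for (t,) in titles])

    # Hand-written equivalent: one query, one round trip.
    print(con.execute("""
        SELECT author.name, book.title
        FROM author JOIN book ON book.author_id = author.id
    """).fetchall())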

👤 eternityforest
Nope, not in any way that is subjectively noticeable. I just started using Google Keep and the snappiness is amazing. Android 12 seems slightly faster than 11.

We are finally optimizing again.

Unfortunately some stuff still lags behind, but it's not like the industry forgot how to write fast software.


👤 jenkstom
We throw out paradigms because they are complex (but flexible) in favor of paradigms that are simplistic and easier to understand. A good example is Object-Relational Mapping.

👤 paphillips
There are a few things to parse here.

1. In terms of software efficiency, engineers may lament the perceived waste, inefficiency, and imperfection in the produced code, but from a business standpoint it is a rational cost/benefit decision. It is useful to view software through the lens of economics. In economics there is a concept that Labor (e.g. a software engineer) and Capital (e.g. servers, infrastructure) are substitutable. Many sub-optimal programs and systems built with reduced labor cost are perfectly usable by substituting more hardware. Optimization only makes sense where there is a clear benefit that exceeds the cost.

Thus, as a contrived or extreme example: would a manager spend $200k in labor to produce a highly optimized program, hand-crafted in assembly, or $500 in labor to produce a program in a higher-level language such as Java that does the same thing but uses more compute resources? The spread in cost between those two choices allows one to throw a lot of hardware at the sub-optimal program. Thus it is frequently a better business decision to produce inefficient software and throw more hardware at it. It may make the engineer feel bad, but what they wish to optimize is not aligned with what the business wishes or needs to optimize.
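
Putting the comment's own numbers into a quick back-of-envelope script (the per-server cost is my assumption, purely for illustration, not a real price):

    assembly_labor = 200_000   # $ for the hand-crafted assembly version (from the example above)
    java_labor = 500           # $ for the straightforward Java version (from the example above)
    server_cost = 5_000        # assumed $ per extra server - an invented figure

    spread = assembly_labor - java_labor
    print(f"The ${spread:,} labor spread buys ~{spread // server_cost} extra servers.")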

2. In terms of the short 'shelf life' of software, the same problem infects hardware, consumer electronics, and other products. I've purchased a number of iPads for my family over the years. After a few years and iOS versions, more and more apps stop being compatible, until the device becomes effectively useless even though the hardware is the same as when I bought it.

Again, let's view this through the lens of economics. A cynic will look at the iPad situation and think 'What better way to separate me from my money than to force me to buy a new product every few years, solely by software shenanigans?' Of course businesses enjoy selling more product, but they also have cost constraints in order to remain viable (ignoring those who are perhaps making 'obscene profits' before competitors take notice, as my econ professor used to shout so passionately).

We might consider as an alternative that it is simply too expensive to maintain many versions of an app, on multiple platforms, with backwards compatibility and security concerns. The business instead is making a rational decision to only support their application on the OS versions and platforms that the majority of their customers are running at any point in time, similar to how a web developer at some point has to stop bothering to ensure their site works in IE 5.0.

None of that reduces my frustration at planned obsolescence but maybe this is just the reality of things.

3. I'm on the fence about the labor-exploitation part: this seems like a different and very complex issue. Some may argue that more hardware manufacturing provides good jobs without extended education or training requirements, while others may argue that those jobs are exploitative because the working conditions are poorly regulated or the positions do not pay enough by their standards.

At a macro level, global poverty levels have significantly decreased over the past 30 years [1], so humanity seems to be doing something right. An optimist may say that as regions of the world move out of poverty, the regulatory environment will inevitably follow to reduce abuse, pollution, and safety risks. Time will tell but it requires patience - human systems are slow to change, in contrast to software and hardware.

[1] https://blogs.worldbank.org/opendata/april-2022-global-pover...