HACKER Q&A
📣 yonki

Article on software only increasing complexity?


A few years ago, a link to an article was posted on HN. The idea in the article was that software can only ever increase the complexity of the things it manages, never reduce it. The idea was illustrated with the example of (possibly) the Australian tax system, which got way more complicated after they moved from books to a digital system. Does anyone have a link saved somewhere? I desperately need this article.

Thanks!



👤 tacostakohashi
I haven't seen that article.

The way I see it, there are a few different kinds of software.

An old kind is something like using a spreadsheet instead of an abacus or punch cards, using a word processor instead of a typewriter, email instead of snail mail / fax machine, etc. In this case, you're using computers/software to increase the efficiency of some external, real-world, non-computer thing, and it works pretty well, especially if you have a complicated logistical problem like running a big warehouse, airline, bank, etc.

Another kind is software that talks to other software: a trading algorithm, an exchange, a search engine, or a spam filter, where the inputs to your software are the outputs of some other person's software. With this kind of software there is never any permanent outcome or result; it's a never-ending arms race. You write some software that temporarily produces better results, then the other side figures out a more elaborate way to exploit it (to get through the filter, to get better SEO results, or whatever), and then you obfuscate some more, change again, and so on.

Unfortunately, more and more software is now in the latter camp :(


👤 woolion
It's not only related to software. Any tool that "makes the task easier" will render the task harder in the long run. Modern agriculture made farming easier, until so much productivity became the expected norm that it turned into an extremely hard job again. Communication technology made it easy to keep in contact with loved ones, but now jobs expect people to move very far away as if it were nothing. Solving a problem through a technical solution only ups the ante.

I'm also interested in the research you mention, but I think it is a special case of this general human behaviour.


👤 Timpy
I think there's a fair argument to be made that computing is the most complex synthetic thing humans have ever conceived on our own. If we're talking about making systems in the broadest sense, then any category of problem that we create systems for could conceptually be managed with paper and pencil. If we define complexity as "the number of things that could go wrong, multiplied by the difficulty a layperson would have fixing them," then a computer is more complex than a filing cabinet every single time.

There are trade-offs for the complexity though, and well-managed complexity can disappear behind the interface of a computer. When this is done well it feels seamless, and when it's done poorly it's painful. So maybe I'm arguing that the meaning of complexity isn't 1:1 with the meaning of complicated. At the end of the day, "did moving this to a computer make it better?" is the question to answer, and a lot of the time the answer is no. QR menus at restaurants are my favorite punching bag for this, but any home appliance with Bluetooth or Wi-Fi is an easy target.


👤 abnercoimbre
Not an article, but I still highly recommend Why Can't We Make Simple Software? [0] by Peter van Hardenberg (head of Ink & Switch [1]).

Disclosure: it was given at my tech conference.

[0] https://vimeo.com/780013486

[1] https://inkandswitch.com


👤 sorokod
Possibly referring to Lehman's laws [1], e.g.

"Increasing Complexity" — as an E-type system evolves, its complexity increases unless work is done to maintain or reduce it

[1] https://en.m.wikipedia.org/wiki/Lehman%27s_laws_of_software_...


👤 spit2wind
> Berglas's corollary, namely that no amount of automation will have any significant effect on the size or efficiency of a bureaucracy

The article you're seeking demonstrates this one way. You can approach it another way: Amdahl's Law.

Any process that takes time T has two parts: a part which can improve and a part which cannot improve. Let p be the fraction of the process that can improve. Symbolically,

T = part that can improve + part that cannot improve

or

T = pT + (1-p)T

Suppose we can introduce an improvement of factor k. Then the improved process time T' is

T' = pT/k + (1-p)T

or

T' = T[p/k + (1-p)]

The overall speedup S, then, is the ratio of the original time to the improved time.

S = T/T'

or

S = 1/[p/k + (1-p)]

It's so simple to derive, I love it. Say you have a bureaucratic process and you're asked to "automate it". You can plug in the numbers and play with them to get a feel for how much overall improvement you can expect. For example, how would the overall process improve in the (unlikely) case that you provided infinite improvement :)
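
For instance, here is a minimal sketch in Python of what plugging in the numbers looks like (the values of p and k below are made-up inputs you would estimate for your own process):

    def speedup(p: float, k: float) -> float:
        """Amdahl's Law: S = 1 / (p/k + (1 - p)).

        p: fraction of the process that can be improved (between 0 and 1)
        k: improvement factor applied to that fraction
        """
        return 1.0 / (p / k + (1.0 - p))

    # Say only 20% of a bureaucratic process is automatable:
    print(speedup(p=0.2, k=10))            # ~1.22x overall speedup
    print(speedup(p=0.2, k=float("inf")))  # 1.25x, even with "infinite" improvement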

A bureaucracy is not necessarily synonymous with "composed of many, many parts," although it often is. When it is, the "part which can improve" is small relative to the part which cannot improve. Amdahl's Law kicks in, and improving those tiny parts has minuscule effects overall. No amount of automation will have any significant effect on the size or efficiency of a bureaucracy.

However, this raises an important philosophical question: if you improve a part, do you replace it? How many parts can you replace in a process before it is no longer the same process?


👤 frfl
It's not what OP is looking for, but there's a relevant video I know of by a well-known game dev, The Thirty Million Line Problem - https://www.youtube.com/watch?v=kZRE7HIO3vk

👤 nier
I’m giving up after spending 10 minutes on Algolia and Google. The closest I can find is this: https://news.ycombinator.com/item?id=31279481

👤 jzombie
I concur that software typically becomes more complex as it incorporates additional features or is modified by numerous contributors. However, using software developed by a government agency as the example doesn't necessarily prove that point. It mainly suggests that government-developed software might not always be efficiently optimized.


👤 lfciv
Bit of a tangent, but I always come back to this read:

https://www.stilldrinking.org/programming-sucks


👤 sys_64738
In 1985 you could run a multitasking OS in 256 KB of memory, and run a word processor and a paint program in that. That included application memory, graphics memory, and OS memory. Now, running a word processor and a paint program on a PC requires 8 GB of memory.
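
As a back-of-the-envelope check on those figures (taking the 256 KB and 8 GB numbers above at face value), the growth factor works out to:

    kb_1985 = 256               # total memory budget in 1985, in KB
    kb_today = 8 * 1024 * 1024  # 8 GB expressed in KB
    print(kb_today / kb_1985)   # 32768.0, i.e. roughly 32,000x more memory for broadly similar tasks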

👤 solumunus
The UK government has improved things immeasurably with their digital systems.