Thanks!
The way I see it, there are a few different kinds of software.
An old kind is something like using a spreadsheet instead of an abacus or punch cards, a word processor instead of a typewriter, email instead of snail mail or a fax machine, etc. In this case you're using computers/software to increase the efficiency of some external, real-world, non-computer thing, and it works pretty well, especially if you have a complicated logistical problem like running a big warehouse, airline, bank, etc.
Another kind is software that talks to other software: a trading algorithm, an exchange, a search engine, or a spam filter, where the inputs to your software are the outputs from some other person's software. With this kind of software there is never any permanent outcome/result; it's a never-ending arms race where you write some software that temporarily produces better results, but then the other side figures out a more elaborate way to exploit it (to get through the filter, get better SEO results, or whatever), and then you obfuscate some more / change again, etc.
Unfortunately, more and more software is now in the latter camp :(
I'm also interested in the research you mention, but I think it is a special case of this general human behaviour.
There are trade-offs for the complexity though, and well-managed complexity can disappear behind the interface of a computer. When this is done well it feels seamless, and when it's done poorly it's painful. So maybe I'm arguing that the meaning of complexity isn't 1:1 with the meaning of complicated. At the end of the day, "did moving this to a computer make it better?" is the question to answer, and a lot of the time the answer is no. QR menus at restaurants are my favorite punching bag for this, but any home appliance with Bluetooth or WiFi is an easy target.
Disclosure: it was given at my tech conference.
"Increasing Complexity" — as an E-type system evolves, its complexity increases unless work is done to maintain or reduce it
[1] https://en.m.wikipedia.org/wiki/Lehman%27s_laws_of_software_...
The article you're seeking demonstrates this one way. You can approach it another way: Amdahl's Law.
Any process that takes time T has two parts: a part which can improve and a part which cannot improve. Let p be the fraction of the process which can improve. Symbolically,
T = part that can improve + part that cannot improve
or
T = pT + (1-p)T
Suppose we can introduce an improvement of factor k. Then the improved process time T' is
T' = pT/k + (1-p)T
or
T' = T[p/k + (1-p)]
The overall speedup S, then, is the ratio of the original time to the improved time.
S = T/T'
or
S = 1/[p/k + (1-p)]
It's so simple to derive; I love it. Say you have a bureaucratic process and you're asked to "automate it". You can plug in the numbers and play with them to get a feel for how much overall improvement you can expect. For example, how would the overall process improve in the (unlikely) case that you provided infinite improvement? :)
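To make the number-plugging concrete, here's a minimal sketch (Python; the function name and the example values are mine, just for illustration):

    def amdahl_speedup(p, k):
        """Overall speedup when a fraction p of the process is sped up by a factor k."""
        return 1.0 / (p / k + (1.0 - p))

    # Automate half the process, making that half 10x faster:
    print(amdahl_speedup(0.5, 10))            # ~1.82x overall

    # Same half, but with "infinite" improvement:
    print(amdahl_speedup(0.5, float("inf")))  # 2.0x: the ceiling is 1/(1-p)

As k grows, the speedup approaches 1/(1-p), so the part you can't touch sets the ceiling.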
Bureaucracy is not necessarily synonymous with "composed of many, many parts," though it often is. When it is, any single "part which can improve" is small relative to the part which cannot improve. Amdahl's Law kicks in, and improving those tiny parts has minuscule effects overall. No amount of automation will have a significant effect on the size or efficiency of such a bureaucracy.
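To put rough, made-up numbers on that: if only 10% of the end-to-end process is automatable (p = 0.1), then even infinite improvement of that part gives

S = 1/[0.1/k + (1 - 0.1)] -> 1/0.9 ≈ 1.11 as k -> infinity

i.e. about an 11% overall gain, no matter how good the automation is.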
However, this raises an important philosophical question: if you improve a part, do you replace it? How many parts can you replace in a process before it is no longer the same process?