HACKER Q&A
📣 timhrothgar

Why is software quality always decreasing?


I have worked in tech for about 14 years at companies big and small. I've worked in startups, big tech, consultancies, and I've been a freelancer. The one thing that's been pretty consistent is excessive technical complexity (aka tech debt). Probably < 10% of codebases I've seen used proper abstractions and commonly accepted software engineering best practices. The exceptions to this are newer codebases (< ~3 years old), smaller codebases (< ~10,000 LOC), and smaller development teams (< 10 contributors).

I've pondered this problem quite a bit. I initially perceived it as the result of engineers compromising in the face of business pressure or engineers making mistakes due to lack of experience or foresight. But as I've become more experienced (and worked as a manager), it seems like a tech business problem. Tech businesses face undesirable situations that result in low quality code as a side effect. Examples include critical employee departures, hyper growth, critical customer demands, and even changes in government regulatory requirements.

Can this be avoided for codebases that are old and large? Does anyone know of examples of codebases (public or private) that have maintained a high quality codebase that is large, old, or supported by a large number of contributors? If so, how is it done?


  👤 jandrewrogers Accepted Answer ✓
Software quality has been increasing for as long as I've been in the business. However, the complexity, scale, and defect surface of software has been increasing at least as quickly. We've invested the systemic gains in quality to expand the capabilities of what software can reasonably do instead of polishing the software we wrote 20+ years ago.

This was the right choice in most cases. The software from a few decades ago was inferior in almost every way to the software we have now for solving the problems we need to solve today. Most software does not live long enough to be "high quality", or it lives so long that its original design assumptions become obsolete and therefore less useful.


👤 PragmaticPulp
I don’t really agree with your premise. Software quality wasn’t pristine a few decades ago. It wasn’t hard to find messy codebases, apps that barely worked, and developers who simply fumbled their way through code until something compiled.

If anything, it feels like common software quality has trended upwards as software engineering learning materials have become more widely and freely available across the internet and we have so many more open-source projects to learn from.

> The one thing that's been pretty consistent is excessive technical complexity (aka tech debt).

Technical complexity and tech debt are two entirely different things. Do you think it’s possible that you’re simply missing the “good old days” when computers did less, software expectations were lower, and it was possible for a tiny team to completely understand and operate a useful software package by themselves?


👤 gizmo
The quantity of programmers has exploded. Programming was a lot harder and more frustrating before stackoverflow and other online resources. If you wanted to learn to program you had to struggle your way through. And those who did would end up being pretty decent programmers.

Nowadays there is no such filter. Anybody can get something to barely work by copy-pasting from stackoverflow. This isn’t a negative by itself, it just means that we now have many professional programmers who never had to try hard. And quality goes down as a result.

Code quality tends to be the worst in areas with very low barriers to entry (web stuff and such) and very high in domains where you need to be a good engineer in order to get anything done at all.


👤 phendrenad2
Software quality is suffering because we don't have software problems anymore. You can't make a new operating system or a new compiler and make money. Those areas have been filled, and software is evergreen, so you can just port it to new hardware.

What we have are social problems, people problems, "disruption" to existing industries, etc. This is software "eating the world" (because it ran out of software problems to eat). This is why every startup pitch is "We're disrupting the commercial real estate loan industry" or "we're disrupting the mine subsidence claims industry". Just absolutely insane hail-mary startups trying desperately to find some niche that hasn't been invaded by software already.


👤 wonderwonder
If I was to guess, I think a lot of it has to do with the transitory nature of so many software jobs, coupled with investor pressure to deliver and get ROI. Engineers switch jobs every couple of years, so projects have a constant inflow of new engineers and a constant outflow of tribal knowledge. Very few people are given the proper time to spin up and really learn the codebase; they get a brief walkthrough and are then expected to start closing tickets. This is how shortcuts and bad habits sneak in.

The business side also faces immense pressure to start making money, and that pressure hits the software delivery schedule, forcing more features into each sprint. I worked at a company where the schedule was always intense and a ton of bugs were making it to production, so we decided to introduce unit testing. We all started learning the new unit testing approach, but then the pressure from above to deliver features ramped up and the unit tests turned into useless stubs. A couple of months later they simply ceased to exist.

The common response will be to push back and say no, but that's not so easy in practice when you have to deliver your tickets for the sprint or get PIPed. While it may not be this extreme everywhere, I bet some aspect of this pressure exists in most places. If not, please let me know where and I will apply :)

This is all my anecdotal experience, I could be totally wrong in the grand scheme of things.


👤 davidw
Economics. People are ok with 'lower quality' software if it has more features and is delivered faster, by and large.

I read a wonderful article that went into this in some depth a while back but can't for the life of me recall where.

Basically, if you're doing some kind of NASA mars rover software, you go over it again and again and are really careful and all that costs a lot of money. It also means you have fewer features and it all takes longer. If you tried to use that sort of process on some banal bit of everyday software, it'd be way more expensive than the competition and have fewer features. You'd go out of business.

I also agree with the other commenters that quality hasn't really declined over the years.


👤 streetcat1
Reasons:

1. The software creation process moved from designing software to "growing" software, with iteration time shrinking from months to weeks. So there is no conceptual integrity.

2. MVPs turning into final products. Quick-and-dirty code becomes the foundation of the architecture, with no time to refactor.

3. Short tenure. I think the average tenure for young developers is less than a year, so knowledge of the code, domain, and abstractions is lost.

4. The market values speed over quality. Software managers are compensated for delivery, not for quality.


👤 PaulHoule
Software systems were simpler back in the day.

Steve Wozniak 'coded' Breakout in 45 chips on a circuit board, then coded it in assembly and coded it again in BASIC to prove somebody could.

There was a limit to how big of a program you could fit in an Apple ][.

A modern game can have a development budget bigger than a Hollywood movie's and fill a whole Blu-ray disc.

From one perspective it is a miracle of progress that the modern game works at all.


👤 majkinetor
It's always decreasing because entropy builds up on longer projects. It's simply a fact of life, not particularly specific to coding. There's nothing to do about it; you can delay it somewhat, but not by much.

The best you can do is to have automated tests, lots of them, and make them work as intended (good tests are very hard to write). Those make refactoring possible and provide specific quality guarantees.


👤 wanderr
I agree that it’s a business problem. Cost, prioritization, rapid hiring, and promotion incentives all play a part. Nobody usually funds a project to go through and clean up code.

In my last two jobs before I went fully down the management track, I deleted 100k-200k lines of code by applying static analysis, deleting obviously dead code, and finding obviously broken code and figuring out whether it was being used (thankfully it mostly wasn’t; in the few cases where it was, I could replace it with calls to properly working methods that already existed). I consistently deleted more code than I ever wrote, but every time I deleted code it was on a personal whim, not because it was a funded activity, even though it also improved productivity (engineers were wasting time maintaining dead code).

Once a codebase reaches a certain size, it’s not realistic for someone to get comfortable enough with the entire codebase and how everything interacts to be confident that major changes won’t have unintended consequences. So even if someone wants to go rogue and do a major cleanup effort, they will only get so far.

Now I try to make sure my team has time allocated to paying down technical debt, but most of the time that is enough to keep the situation from getting worse, more than really improving the situation.


👤 spenczar5
I think there is a strong survivorship bias effect. One feature of good code is that it is easy to modify; bad code is impenetrable.

But this means bad code changes more rarely and more slowly, because it is hard. So the good code gets most of the modifications. Now, even if most of the time the modifications maintain quality, they certainly sometimes turn good code into bad code.

This is a system that gradually chews up your good code and turns it bad, until you have a big nasty mess you throw out, and start over again.


👤 bob1029
I disagree with the assertion that quality is always decreasing.

For me, this is a game of perceptions where crappy software becomes popular and profit motive makes it risky to attempt iterations. No executive working at Microsoft is interested in the liability that would come with a win32 rewrite of Teams, even if the UX would triple in quality overnight.

If anything, the quality of most software should be dramatically better than ever before, especially when you factor in the relevant tools & ecosystems.


👤 ajkjk
Lots of reasons mentioned here.

One big thing I think is overlooked is: the kind of people who start projects that succeed tend to be good engineers, and the kinds of people who jump onto them later tend to be less good. Not bad, per se, but just not as remarkable. Usually these people are plenty good at the business's needs: shipping features, fixing bugs. But not at the kind of holistic, visionary, motivated work required to unfuck a massive project.

IMO people who are mediocre programmers (which I kinda count myself among? I'm trying to be better but it's hard to keep the motivation up) don't understand how much better at this the really good engineers are because they're almost never exposed to them. You don't write code alongside the best programmers at your random corporate job, because the best programmers don't work there (or if they do, they usually aren't writing much code). The senior engineers are senior because they're adequate programmers and excellent shippers of products. Etc.


👤 analog31
My mom taught programming at a community college in the 80s, and I graduated from high school in '82. Based on the experiences of her students after taking her course, I would have qualified for a programming job, having learned BASIC and Pascal fairly thoroughly. I don't know if the same level of proficiency (assuming a more modern language choice) would get me a job today.

Thankfully the Shadow IT Department has lower standards. ;-)


👤 aayala
Sadly, most companies assume developing software is more important than maintaining it. This is reflected in the salaries of software developers compared to operations teams.

👤 nonameiguess
I have seen one old, large codebase that remained tremendously high quality. It was the algorithm suite for ground processing of electro-optical spy satellite collections for the US. By far the cleanest, best-organized, best-tested, and most reliable software suite I've ever had the pleasure of working on. The only downside was very long build times, upwards of two hours when I got there, but we eventually got it down to 45 minutes. All written in C++ and tiny bits of Fortran with the test and simulation tooling written in Ruby and Perl.

A few factors that I think contributed to the high quality:

- Waterfall development. It may be wrong for fast-moving commercial applications with fickle user bases, but we were targeting hardware capabilities we knew in exact detail years in advance of them launching into orbit and becoming usable, so taking the time to write out detailed specifications and requirements and tailoring verification testing to these structured the development. No chaos. Everything had a clear purpose with a really obvious way of telling whether what you did was right or wrong.

- Dedicated testing teams independent of the dev teams. When their entire job is to find bugs and break stuff, it makes a difference.

- Very little library use. This wasn't really a "build versus buy" decision per se so much as the program had existed for such a long time that, when we solved most of these problems, we were legitimately the first to do it, but since the code base was entirely classified, we couldn't release our own work as libraries. The upside is that all of our work went into actually writing and testing code, with very little time spent managing dependencies, and the developers understood what everything was doing, including low-level functionality like memory and thread pools, filesystem drivers, and scientific functions like coordinate transformation and ground-to-orbit projection. Nothing was a black box. We understood our system because we wrote it.

- Continuity. The lead technical people, from the research scientists who developed all the algorithms to the software architects who came up with the data structures and class hierarchies, were often 30+ year veterans who had never worked on anything else. They were the world's foremost experts in what we were doing, so they were good at it.

- Effectively no corporate-level management interference in what we were doing. The work was all classified and the top suits weren't cleared to know what we were doing anyway. We're either costing more than the contract pays or we're not, and that's all that really matters. They can't micromanage if they can't even get into your building.


👤 throwhauser
There aren't that many old, large-scale codebases in use that are failures. If most of these are not following what you consider to be best practices, maybe those practices aren't as advantageous as you assume.

👤 pbalau
> Does anyone know of examples of codebases (public or private) that have maintained a high quality codebase that is large, old, or supported by a large number of contributors?

FreeBSD, OpenBSD, the Linux kernel


👤 cies
Mainstream went from C, to C++, to Java, to JavaScript.

If you have written GUIs in C/Win, C++/Qt, Java/whatever, and JS/React (or Vue) you should have an idea of what went wrong. That's just GUIs.

Bad code is always going to happen; when you make stuff, you will regret some choices. That's a given. But at what cost you can refactor your way out depends a lot on the language you have used.

Now there are languages that make it harder to create a mess that is hard to refactor yourself out of. They are languages that, you could say, optimize for refactoring by using strong typing: Rust, Haskell, Elm, OCaml/ReasonML/ReScript, Kotlin to some extent.

But none of them are mainstream.

The IDE assistance I got when refactoring C++/Qt in 2001 is still miles ahead of what I get with JS/React (or Vue) in 2022.

I recently did an Elm app on an automatically generated GraphQL API on a PG db with Hasura. I could auto-generate type-safe bindings to the GraphQL schema in Elm. This was the first time I felt that C++/Qt (or Ruby/Rails) kind of next-level power again: type safety from the db schema, through the API, to my frontend/UX code in strongly typed Elm.

So I think it is improving, but not so much in mainstream languages.


👤 nlfire
In my first job, I remember a boss once said to me that we didn't really get bug reports anymore for our component, despite the fact that they were still pouring in for the rest of the product. There had been a half-year-long initiative to improve software quality for my team, supported by the broader organization. It didn't save the company, but we reached a point where we had fixed most or all of the bugs and added automated tests for corner cases. So things got really quiet. Hard to believe. As an engineer, it was incredibly satisfying.

That phase of my career was very rare. Yes, I have gotten periods of time where I get to pay down technical debt, but mostly the bosses/employers just want me and my colleagues to move on as fast as possible to start the next project. They don't care how many bugs are filed against the old project, and we'll just squeeze in the critical ones.

The go-go-go attitude is what wears me down and makes me want out of the industry. I want to feel like I finished something. Not perfection, but finished you know? That there isn't a mountain of bugs I never even looked at?

I don't think this is something new however.


👤 arnvald
Besides the reasons mentioned by others, I want to add: discipline.

Keeping a codebase well maintained requires effort, and it's often very tempting to cut a corner here or there, or to focus on adding new features instead of keeping dependencies up to date. Initially these shortcuts don't have a big impact, but at some point you notice it has all become messy, and now it's hard to bring it back into shape.


👤 smoldesu
I think it's a matter of expectations. We assume that software can grow with infinite vertical complexity given enough time and effort, but simultaneously we make enormous efforts to dumb down our interfaces and make things more user-friendly. You're not wrong if you think that old Unix machines were more versatile than a brand-new laptop, if only because the command line forced you to think like a programmer to get stuff done.

Nowadays, people don't care. The 90s ruined us with its impulse economy; society as a whole felt entitled to just the good parts and nothing more. As a result, software got developed that way. Fine Corinthian leather, Gaussian-frosted glass, and lickable scrollbars won out against dependable, powerful software interfaces. Society doesn't want good software, it just wants to feel good. People can leverage that desire to make a lot of money by selling mostly-satisfying software. Stay hungry, stay foolish?


👤 moonchrome
>proper abstractions and commonly accepted software engineering best practices

Put 10 senior developers in the room, ask them a question about those practices. You'll probably get 8 variations on a mainstream approach and 2 alternative approaches.

As I'm often switching between stacks, I'm shocked at how bad automated tooling is at enforcing basic formatting guidelines in some languages. IMO JavaScript's Prettier is the best here: it offers practically no room for subjective tweaks.

For example, I'm currently working in C# and it's so much worse in this regard; the formatters in different IDEs aren't even compatible.

So if something as trivial as formatting is this hard, what do you think enforcing style guides and other quality metrics takes?

In a small, young team you can have one or more people steering the ship, but when they move on or more new people are added, even in the best case the newcomers are just guessing at what the previous team would have done to maintain the style.


👤 tcgv
Entropy. By adding new features to a system, its complexity can only stay the same or increase. Complexity only goes down if we take the time to refactor the codebase. Adding new features is cheap and quick; refactoring is usually expensive and slow. Hence software entropy, which somewhat relates to quality, usually goes up and up.

👤 alkonaut
For users (and hence product owners, managers etc) the most important quality is features. That the product does what they want. So long as users would rather have software that does 20 things poorly than 10 things well, then that's what they'll get. Because the software with twice the functionality will cast a wider net and catch or keep more business.

The reason you see the buggy and bloated software is survivorship bias, I think. The software you see is the software that survived long enough to become bad. That programs grow old and bloated is a testament to the fact that someone uses them. This was always the case too. Software was definitely not any better historically.


👤 marcinzm
>Tech businesses face undesirable situations that result in low quality code as a side effect. Examples include critical employee departures, hyper growth, critical customer demands, and even changes in government regulatory requirements.

Go look at the business teams of any company and you will find a mountain of excel floating around. Magic excel that does magic things that no one understands anymore. Processes that make no sense but are ossified in place over decades. And so on.

This is a problem of having many people working together without an actual unified goal and nothing to do with tech. Large corporations are inefficient and slow. They are called dinosaurs for a reason.


👤 tester756
>Can this be avoided for codebases that are old and large? Does anyone know of examples of codebases (public or private) that have maintained a high quality codebase that is large, old, or supported by a large number of contributors? If so, how is it done?

The thing is

if you took three engineers with 10 years of experience each (one in web-app backends, one a heavy FP programmer, and a third in, e.g., kernel programming), they'd have different definitions of "good code"

for web apps it's probably a lot of abstractions/indirections, DDD, patterns, heavy OOP, testability

for kernel code there's nothing wrong with gotos, ugly hacks for performance, 10 meters of ifs, stuff like likely/unlikely


👤 Beltiras
I was revisiting a codebase I hadn't seen in 6 years and I gasped aloud at how pretty it was. I thought at the time: "Why don't I write beautiful code like this anymore?" and I think I have an answer (for my case). I excel at visiting these 10 year old plus hulks and doing a refactoring of the shortcuts and the bad solutions. Problem is that this rarely surfaces in my own current work which is more greenfields than I am comfortable with.

I'd love for some more contract work in this vein but I am unsure how to find it.


👤 shime
I think software quality is always decreasing because of multiple factors.

1) The tension between leadership that usually only prioritizes shipping new features and engineering that wants to write sustainable code. There is usually no incentive for engineering to decrease technical debt or increase sustainability, as these changes are usually not tangible to leadership. The only short-term beneficiaries of one engineer's code quality improvements are other engineers, which often don't care that much, as they are busy shipping new features as quickly as possible.

2) Speaking of short-term, not all leadership thinks long-term. Sustainable code makes sense if you're thinking long-term, but not if leadership is chasing lucrative exits in a couple of months. If a project survives for 20 years, chances are it survived multiple leadership changes, which all thought short-term.

3) In software, the only constant is change. As a project gets older, it has to deal with all the changes that aging entails. Aging means increased exposure to reality and all the "surprising amount of detail"[1] it contains. This increases complexity, introduces bugs, adds edge cases to deal with, etc.

[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...


👤 pietromenna
You already got the main reason in your text: " engineers compromising in the face of business pressure or engineers making mistakes due to lack of experience or foresight"

Those two factors are a reality nowadays: people leave teams once they gain experience and go to a place where they have zero experience (knowledge is lost), and everything is built under pressure to meet market demands (so we rush to deliver features).

I would also tell you that in the past, well-organized projects were the exception, not the rule.


👤 sershe
One reason is people changing jobs more ;) On large, old codebases, this basically results in lava-flow development. The difference from regular code rot, which could be prevented or reversed with refactoring, more time spent on code quality, best practices, etc., is that it's self-reinforcing and self-perpetuating: metaphorical layers of code rot on top protect those lower down, shifting the trade-off between cleaning up and adding one more layer even further towards the latter ;) Old layers solidify and new ones flow on top of them (layers being metaphorical, not "architectural").

Say some part of a large, complex system, written lovingly with best practices by a guy who left 3 years ago, needs to be modified. The new dev doesn't have a good mental model of this code, or maybe has no model because it's the first time anyone still on the team is looking at it in depth. Even with the best intentions, they tend to work "around" the old code with fewer modifications than the original developer would have made to properly integrate the change; instead they put stuff on the "outside". Aside from additional code rot and clunkiness, this results in fragile dependencies that are even harder to untangle and clean up, and it perpetuates the cycle.


👤 Joeri
Codebases are sensitive to entropy. They degrade over time as they are changed, and it takes active effort to push back on technical debt. On a young project there is not much technical debt. On a small project you can have a quality release where you pay off most of the technical debt all at once, which is doable. On a large and old codebase, the only way to push back against accreting technical debt is continuously and incrementally, reserving a slice of the budget in every cycle specifically for quality work. Most organizations do not do this, because that time gets spent on features instead. And that’s why codebases degrade as they get older and larger. Because they become unpleasant to work in, the older and more experienced developers leave for greener pastures. The more of the core team leaves, the bigger the incentive to leave. Eventually the project is mostly staffed with people of lesser skill or lesser tenure (contractors passing through, juniors) and the practices start degrading as well.

And this is where you come in, on an old ugly codebase with perennial quality problems and bad development practices, stuck in a hole and trying to dig its way out. Eventually it will get replaced by a fresh new project by a fresh new team, who just know that this time it will be different.


👤 karmakaze
A lot of this has to do with the 'softness' of software. In the past, software was developed, shrink-wrapped and sold in physical media from warehouses and store shelves. The cost of bugs and errors was much higher and could result in software failing to gain adoption, or costly updates having to be prepared and physically shipped on floppies or CD-ROMs.

Video games are the most recent to undergo this transition. Until not too long ago, physical DVD or Blu-ray media was the standard distribution mechanism. There were downloadable patches available post-launch, but their sizes were originally somewhat limited by local storage capacity. Now digital distribution over broadband connections is very common.

The softest of all is cloud software sold as a service. Any bug can be fixed for all users by updating the servers with a CI/CD deployment, sometimes in minutes. With this increased softness, the product is always in development, and there will always be new areas with usability issues alongside stable parts that have been worked out. At any given time there could be a large number of open bugs if you were to count every one. Look at the reported issues for a popular repo that you thought was stable software; most of the time, most issues don't affect your use cases.

Long story short, it's what other comments say: economics and optimization. Of course there are also bad development and release policies, which amount to poor management or engineering culture, but in my experience I can't say this exceeds what's economical, except perhaps in the rare cases I ran away from.


👤 duped
I think what it comes down to is that code quality isn't a core value prop for businesses, at least not directly. When you consider that the single greatest risk to projects is engineering expense (meaning engineer time), then sacrificing code quality for quicker turnaround is an easy decision.

I think your experience has some profound survivorship bias too. The shitty codebases last longer, which should be incredibly telling about the value of code quality to a business.


👤 bryanrasmussen
I've been mainly working in web development since 2009 (before that I did it sometimes but also lots of standards work and data handling), so in that field I have seen a lot of commonly accepted software engineering best practices in use, but those commonly accepted software engineering best practices were not necessarily the commonly accepted software engineering best practices of the modern day, but rather of the day that the code was written.


👤 HellDunkel
I worked as a software dev for about 10 years before I turned towards freelancing and slowly gravitated away from programming. I felt very much like you do and tended to be rather annoyed by what was done and how. Much later I was lucky enough to work on a software project with just one other developer, doing no coding myself but doing all the things I thought weren't getting enough attention in previous projects, mainly testing and design. It turns out this is the most exciting, appealing, and useful piece of software I ever contributed to, and although I wrote very little code, I am super proud of it. The codebase is quite OK: not too big (150k LOC), not too clean. But the fact that this thing just WORKS and is FAST is extremely rewarding. So for me the best thing to do was to step aside and approach things from a different angle.

👤 ppeetteerr
The cost of keeping software up to date (e.g. with the latest practices) increases exponentially as the size of the software grows. When you have a company with over 100 contributors, and/or many years of code, the cost of maintaining some ugly bits becomes excessively high, but the value of upgrading that software is not always quantifiable.

A rule of thumb is that you'll always have at least 2x or more code than engineers to maintain it. When setting priorities, you have to be pragmatic about which code to update, and which features to build.

That's probably why you're seeing so much old code.

By the way, as a freelancer, you may also be hired to maintain the ugly bits of a system. Full time employees generally work on building value, whereas freelancers are hired to take care of things that people generally don't want to hire full time employees to do.


👤 nickm12
I only have my narrow view of the industry (a handful of teams across 4 different companies) but I disagree with the premise. It seems to me that software quality is on average better than when I started out in the industry in the mid-2000s (and at school before that). In that time, I've seen version control, code reviews, automated testing, and reproducible builds all go from novel tools we were dipping our toes in to industry standard practices.

That said, I think it's only slightly better, and not as good as it should be given our tools and (collective) experience. I chalk this up to the reasons other people have given—mainly the business of software doesn't favor clean code and the majority of code is written by new developers who are still developing their skills.


👤 feim_2022
It is a solvable problem, but most companies' incentives don't line up with solving it. We accept good-enough software, so few people learn how to do better. Because of that, quality continues to take a lot of time, so companies don't think it is worth pursuing, and the cycle continues.

Some software companies realize that quality is important for their business. And they do this right.

System software providers do this well. Examples such as Amazon's S3, leading relational databases, and even embedded software like the Arista EOS that Arista runs in its networking gear all have very high quality.

The reason we don't see it all around us is that most well-written software is invisible to us. We don't really think about the iOS software on our phones; its crash rate is extremely low.


👤 menotyou
Reasons: overuse of OOP, DevOps, CD, microservices, Java, JavaScript, Node.js, GitHub, web services, etc.

Each of these things brings multiple tools, frameworks, design patterns, conventions, and standards, all hyped for a short time until everything breaks down under the additional layer of unmanageable complexity, until the next hyped toolset is introduced promising to solve all the underlying problems, only to add yet another layer of complexity.

Later you can't find anyone who can maintain a codebase developed with a framework that was hyped five years ago.

Example of a large, old, good, maintainable codebase supported by a large number of developers worldwide: ABAP (sorry, no GitHub, no open source; and sorry again: an imperative language).

And the silver medal goes to............................: SQL.


👤 tmp_anon_22
Your ability to recognize how bad software is has improved too, and the ambition of software has increased over the same period. The industry as a whole is probably doing things better; for example, there are fewer ridiculous crunch periods at the organizations I've worked at.

👤 jimmont
This is like asking why does entropy exist. Because that's the way it works. Add economic incentives that don't correlate with maintaining quality and that pattern is accelerated. Look at healthcare in the States for an example. Or Apple software quality for shining examples of expansive software efforts. I've sat having a beer with folks working on major projects we use every day and when I mention their product has a problem they defend it--really couldn't care less about the truth or their customer. Or maybe I'm just missing something. Silicon Valley is a brand. And software development is an arts and crafts effort--not engineering. This is a generalized comment.

👤 behnamoh
Maybe absurdly high salaries for software engineers, and the opportunities to rise to the top, made developers less inclined to stick with their piece of code, abandoning it for the next person to maintain, only for that person to be lured away by the same salaries and PM opportunities.

👤 BatteryMountain
Honestly, in my case, every single place and project I've worked at/on has been a trash fire. I've done my best to keep things as simple as possible, but there is always one developer who, you know, if he touches the code, the mess will linger for six months.

At this point I'm considering only joining startups with greenfield projects so I can have more power to keep things simple. Joining projects that have been around for a while has become tiresome. It's always the same kind of messes too and they are always abandoned. I wish developers would stop building things and then running away. It's your mess, your name is on it, clean it up before you leave.


👤 cryptica
I noticed the same thing. Much of the over-complicated tooling and libraries which are being accepted in the industry today without any resistance would have triggered furious debates if they had been introduced 10 years earlier.

There is a technology mono-culture, heavy in dogma and censorship.

I suspect that is what happens when prominent developers who happen to be working for a financially successful company like Facebook, Google or Microsoft end up promoting and heavily pushing their favorite tools onto everyone down the hierarchy and company boundaries (as if they were silver bullets) and censoring nuanced, rational discussions in the process.


👤 rmk
Software "Engineering" is not really an Engineering discipline, such as Civil Engineering, where there are norms, standards, accepted practices, regulations, and a body of knowledge that hasn't changed much in hundreds of years. Or compare it to Mechanical Engineering, which is similar.

In comparison, software is a very young field, and it is not regulated at all. Open any EULA and you will see that you are indemnifying the vendor from any sort of liability to the maximum extent possible. Further, software is only loosely limited by physics: almost anything you can imagine can be built with software, so software creations are incredibly complex, entirely man-made, and have extensive combinatorial inputs that simply can't be tested, or are not economically feasible to test. Also consider the fact that software is almost always purpose-built and not comparable to a mass-produced artifact such as an IC engine, a girder, a length of rebar, or a chemical reagent. So the methods of Industrial Engineering, which concern themselves with quality control in the face of uncertainty, do not neatly apply to the software world. The rate of change of software artifacts is just mind-boggling: once a bolt is produced, that's it. It doesn't need to be continuously maintained, updated, or improved, because it simply doesn't possess the malleability of code. If you fasten it with the right amount of torque, you are probably good for an indefinite length of time and can expect to simply forget about it. The software equivalents of nuts and bolts, such as compilers, are nowhere near as simple as nuts and bolts: they are whole universes in their own right, and every detail of that universe is man-made, conceptualized from scratch.

I could go on, but given that software is so complex, it is to be expected that it doesn't hold up to your notions of quality.

If you want to look at large codebases that have maintained a high quality, the obvious recommendation is the Linux Kernel. Another is the Go Standard Library, which is very readable and well-written. I think musl, a minimal libc replacement, is also considered very well-written (but perhaps it's not a huge codebase). Postgres source is also widely known to be pretty good for such a complex codebase that is worked on by a large group.


👤 softwaredoug
Because the amount of software is increasing?

If last decade only seasoned professional bakers baked bread, average bread quality would be amazing.

If this decade, _everyone wants to bake bread_, average bread quality would be pretty bad.


👤 wesapien
Because the information needed to build software is widely available, the barrier to entry has been greatly reduced. That's why I always troll the people who complain about Haskell being the language of choice for the Cardano cryptocurrency. They say it's hard and no developers will use it, but they've just demonstrated why they shouldn't be writing anything related to finance or other critical infrastructure. There's so much complaining because they see a gold rush and want to jump in without the skill needed.

👤 salawat
The key to a high quality product is simply to be unwilling to compromise.

Look at the Space Shuttle's code. Look at the code and systems to drive anything where any level of assurance must be maintained.

Quality is hard. It is all-encompassing, and for all too many optimizing decision makers it is "the crappiest thing I can sell, built for the least money possible, without looking like a nitwit, killing somebody, or doing something so blatantly illegal that I can't get the lawyers to realistically dig me out."

The rest is noise.


👤 deathofsocrates
It is highly dependent on the engineering practices applied, and on the team structure as well. I'd suggest checking out the book Team Topologies by Matthew Skelton, which suggests forming teams according to the software architecture instead of business needs. "Release often and small," "go with microservices," "independent release cycles," and many other ideas, whether driven by market competition or by buzzwords, are either not applied properly, or the engineering managers are unable to see that the underlying/existing structure requires changes before going down those roads. I also think test automation practices are generally not well adapted/applied, even though the innovations in that area have been massive lately. So it's more a matter of principles and perspective than anything to do with the codebase or the number of people working on a project. Generally I do not agree that quality is decreasing; maybe most of those big projects just need more time to adapt to industry changes, or maybe they will vanish in time anyway because their competitors eventually do a better job in the long run.

👤 dustingetz
Economics of capitalism – you win big by being first to market and then using your advantage to backfill the product.

In startups this is exaggerated further – you need to double with each capital milestone, or you starve to competitors and die. Grow your valuation faster than you grow your technical debt and you can buy your way out with armies of engineers. Smash your competitors with money or buy them outright.

In enterprise, Microsoft software quality oscillates in waves and this works because of their mortal lock on distribution. MS Teams can get away with being way worse than Slack and Zoom. The result is MS can just push early stage trash on us and get away with it long enough to backfill the quality as the next technology wave crests.

Growth and distribution matters SO much more than tech.


👤 menotyou
What succeeds on the market is unstable and unfinished software with a long list of features no one needs, a nice-looking but unergonomic user interface, and an architecture based on the newest hyped buzzwords that is too complex to deliver a stable product.

What users need is stable software (no CI/CD churn) that has the function set they need to do their business.


👤 strictfp
In my opinion we need to ditch the old idea of hiring more people to speed up feature delivery, and focus more on streamlining the codebase and making it easy to make changes to it.

Adding more manpower works around an inefficient codebase by writing more code on the side, and so we get a growing Rube Goldberg machine, no matter if we do monolith or 4000 microservices.


👤 azth
Part of it is the widespread use of subpar languages for large scale development. Python, javascript, golang, etc. are all very bad languages to use in large scale dev. They may be relatively quick to get things off the ground, but once things become complex, their shortcomings are going to affect development quickly.

👤 rvr_
I think you almost answered your own question. For me, quality software is made by very small teams of very competent people improving the software over a long time. We don't know much about the vast amount of closed software, but in open source, the state of the art follows these rules.

👤 supertrope
Businesses are complex and have legacy cruft and have institutional strengths and flaws. Their work product reflects that. Sometimes it runs deeper than any one firm or industry. Just imagine how much easier calendars would be if each month was exactly four weeks long.

👤 duxup
What counts as decreasing quality?

When I started playing with computers an error often meant a hard crash of the whole system.

Now an app dies (rare) and I just restart it and often it comes back up just where I left it.

That’s a very specific example but I certainly prefer it to the past.


👤 drewcoo
"That which can be asserted without evidence, can be dismissed without evidence." - Christopher Hitchens

If you dig for documentation, for actual proof of your claims, I bet the answer(s) will be more obvious.


👤 zqna
Here is the answer: around 70% of engineers just don't care; they are happy to collect the paycheck and get on with their lives. Another 20% are pushed around by business needs, often lingering near burnout, eventually moving to other companies to try their luck and reclaim some peace, however short-lived, and often joining those 70%, or at least no longer taking things too seriously. Then there is the last 10% of quality nazis, who totally disregard business reality and cause stagnation in companies as they keep beautifying and overdesigning without delivering anything when it's needed.

👤 ramesh31
I subscribe to the Groucho Marx theory of social organizations. The fact that I am able to make it in this industry is proof alone that standards have fallen.

👤 jamjamjamjamjam
You mean code quality. Software quality is the quality of the actual product. Software quality is decreasing at an even faster rate, though.

👤 tschellenbach
Companies try to build everything in-house. I'm an engineer, and I'm guilty of this myself, but often we just love building things. If you build too many things you eventually lose focus. Maintenance becomes harder, upgrading software becomes harder, and team members eventually leave. It turns out that everyone loves building, but nobody wants to maintain someone else's software, so you throw some duct tape around it.

I think the solution is to build less and reuse more.


👤 galkk
It is not; looking back without rose-tinted glasses, we can see plenty of bad stuff in the past too.

👤 svilen_dobrev
Almost as in the old joke: there is a constant/limited quantity of intellect on earth, but the population keeps growing...

There is a constant/limited ability to grasp things, but the things, and the complexity that a piece of software represents, keep growing.


👤 aaccount
Newer tech allows lower-skilled developers to build programs, which in turn leads to more tech to dumb things down even further, resulting in more inefficient programs over time.

👤 tored
What is good practice today is bad practice tomorrow.

👤 bluepoint
Because of the second law of thermodynamics.

👤 GhettoComputers
Have you seen this talk? Jonathan Blow – Preventing the Collapse of Civilization https://news.ycombinator.com/item?id=25788317

tl;dr

Software efficiency is traded for development-time efficiency; you see this less under hardware constraints, as in embedded systems programming. As hardware capabilities increase, efficiency becomes less necessary. I see a reversal of this with Rust (ripgrep is amazing) and WebAssembly, while making programming easy leads to poorly optimized software like Electron wrappers.


👤 riskneutral
I think you mean "code quality" is always decreasing. The more people who touch a codebase, the lower the code quality gets. As soon as more than one person is involved, the conceptual integrity and consistency of the code begin to break down. Even with a single individual, if they stop working on the code for a long time, code quality will decrease when they resume development. The very best codebases are the product of a single mind, developed in a short period of time. These are, necessarily, smaller and more recent codebases, since one person can only do so much for so long. A small team can be cohesive enough to effectively share a common understanding of what "conceptual integrity" means for that codebase. As the team grows, organizational communication and coordination problems take over and codebase integrity degrades.

The other enemy is time, because eventually the author(s) of any codebase will move on, and it will become someone else's task to inherit and maintain that codebase. That is where the largest breaks in code quality occur, due to the loss of institutional knowledge. It is very difficult for institutions to retain intuitive knowledge, such as what "the conceptual integrity of the codebase" means. The problems are exacerbated by the fact that code is rarely well documented, and conceptual integrity is hard to express because it is implicit in the code rather than explicit. They are exacerbated further by the fact that code quality is not the most important objective for most software developers and organizations. There are more important objectives, like delivering bug-free, working software on time.

Which brings me to my point that "software quality" is something different. "Code quality" is what programmers obsess over because it determines the quality of their lives, but "software quality" is what defines the lives of their users. I would define high-quality software as performant, efficient, bug-free, secure, correct, with a usable interface, etc. Fortunately, these objectives can be met without necessarily having very high code quality under the hood. The core banking software keeping track of your bank balances, or the autopilot software on your next plane ride, is probably written in some terrible legacy COBOL, FORTRAN, C, or C++, and probably carries a huge amount of technical debt. But the objective quality of the software from the user's side is very good, and once a critical software component has been written and tested, the preference is not to change it (if it isn't broken, don't fix it). As long as your bank balance shows up correctly and your plane doesn't crash, you don't worry about the underlying code quality at all as a user.

So, in summary, the bad news is that you can never prevent code quality degradation. Any large, growing and aging system will inevitably lose conceptual integrity, have poor code quality, will come with a mountain of technical debt, and will get harder and harder to modify and grow over time and scale. The good news, however, is that you can still ensure quality of the product for the end-user by throwing enough people, grind and money at the problem.


👤 sys_64738
Sprints and SCRUMs.

👤 hogrider
Given enough time and resources, engineers will problem-solve the quality issues. It's just that under capitalism there's no incentive; this has been pretty clear to me since I first entered the industry.

👤 StewardMcOy
In addition to some of the answers already given, I can think of two reasons why a code-base deteriorates over time.

The first is that best practices, even if unanimously agreed upon, don't always survive contact with the user. Users will use software in ways that you didn't intend, and their usage patterns may expose bugs or have deleterious performance impacts on your code.

For example, if you're using a functional core, imperative shell design, you get all of the stability and ease-of-reasoning benefits from immutability within your functional core, but users may need to update part of your data model very frequently that you expected would seldom need to be changed, and the way the code is designed, changing this part of the model triggers a very expensive rebuild of the world. At that point, you're either forced to completely re-architect or come up with a clever hack for this one specific use case.
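For readers unfamiliar with the pattern being described, here is a minimal sketch of "functional core, imperative shell" in Python; the shopping-cart example is entirely hypothetical, chosen only to show the shape:

```python
from dataclasses import dataclass

# Functional core: immutable data plus pure functions.
@dataclass(frozen=True)
class Cart:
    items: tuple  # tuple of (name, price) pairs

def add_item(cart: Cart, name: str, price: float) -> Cart:
    # Pure: returns a new Cart, never mutates the old one.
    return Cart(items=cart.items + ((name, price),))

def total(cart: Cart) -> float:
    return sum(price for _, price in cart.items)

# Imperative shell: all I/O and state-threading live out here.
def main():
    cart = Cart(items=())
    cart = add_item(cart, "book", 12.5)
    cart = add_item(cart, "pen", 1.5)
    print(f"total: {total(cart)}")  # prints "total: 14.0"

if __name__ == "__main__":
    main()
```

The catch the commenter describes is visible even here: every update allocates a fresh `Cart`, which is fine for small state but can mean "rebuilding the world" if a hot update path touches a large immutable structure.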

And re-architecting isn't always a guarantee of success. I once worked on a data warehouse system where the strong ACID properties of the database, along with the way the data was segmented, ended up causing problems with the specific server hardware we were using. It's been decades, so I don't remember the exact issue, but something about issuing specific sequences of reads, seeks, and writes over and over caused an issue with the disk buffer, and when the OS periodically went to read data it needed, it ended up with our app's data instead. It could have been solved with a different server, but at the time there was no budget for it, and a migration to new hardware wouldn't be possible until after the holiday retail season, so we ended up having to store some customers' data in files on disk rather than in a proper database. Then we had to change the schema in response to new requirements, so by the time we migrated to new hardware, reconciling the divergent schemas between the database and the files was a nightmare, and it might never have gotten done without some serious politicking, which still took a couple of years.

The second reason is that development environments change over time, and invalidate a lot of the assumptions apps are developed on. It's not just frameworks and libraries, but also languages.

I once worked on a Python 2 web service that did heavy text processing and had to make extensive use of Unicode. Just due to the history of how Unicode and Python developed in parallel, Python 2 had some eccentricities when it came to Unicode support. We understood all of these well and were able to develop a well-tested codebase that abstracted away these issues. Python 3 completely changed how the language handled Unicode, and the result on our project was disastrous. We essentially had to rewrite everything to make it work, but it was so messy we threw it all out and completely redesigned it. I can imagine a lot of companies wouldn't want to make that investment, especially since Python 3's Unicode support, while better, is still quite clunky compared to other languages. It's a hard sell to tell your boss you built something on top of broken Unicode support, and now you want to rebuild it on top of a different broken Unicode implementation. They'll just ask if you'll need to do that again in ten years.
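To illustrate the kind of change being described (a generic sketch, not the poster's actual code): Python 3 draws a hard line between text (`str`) and `bytes`, whereas Python 2 allowed byte strings and unicode strings to mix implicitly, sometimes succeeding and sometimes raising `UnicodeDecodeError` depending on the data that flowed through.

```python
raw = "café".encode("utf-8")   # bytes, as on the wire or on disk
assert isinstance(raw, bytes)
assert len(raw) == 5           # 'é' occupies two bytes in UTF-8

text = raw.decode("utf-8")     # explicit decode is now mandatory
assert isinstance(text, str)
assert len(text) == 4          # but only four code points

# Mixing bytes and str, which Python 2 often permitted implicitly,
# is always a TypeError in Python 3:
try:
    raw + text
except TypeError:
    print("cannot concatenate bytes and str")
```

A codebase whose abstractions assumed the Python 2 behavior would need every text/bytes boundary reexamined, which is the kind of wholesale rewrite the commenter describes.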


👤 supperburg
Software has a better UI/UX. Software has matured and conformed to the market. The market is full of dumb people who care about UI/UX and nothing else.

👤 devwastaken
Take someone who wants to work in software and watch as they learn little about creating software while getting a comp-sci degree. Tech companies have for decades refused to fund education, yet they play CS trivia in hiring.

There's a reason the system is set up this way. It's not an accident; it's intentionally designed to take kids who follow orders, load them with debt, and make them work for less. This is how companies suppress workers banding together and save big money on salaries. Those who are actually indispensable get the big pay and certainly won't complain.

If we want quality, we have to start making the companies pay their share, be willing to remove worker and education visas, and overall lock U.S. companies into the North American and European markets. These corporate giants are misbehaving children, and we have to put our foot down with them.

Follow the money.