Some app owners, who would be like "omg our software is segfaulting" — it didn't segfault before, so we need to track down the exact last thing that changed that made it segfault, fix that "root cause", and leave every other dumb thing we are doing exactly the same as before in case we break anything, because we should only fix the exact thing that caused the most recent segfaulting incident.
Some other developers, who would be like "ok... why don't you start with fixing all of your compiler warnings", and then your app will probably gradually start working better, because the exact thing that caused it to segfault was probably one of the hundreds of compiler warnings / obvious problems with the code, and it's not really worth obsessing over which exact dumb thing is the problem this time. Maybe just stop doing dumb things in general, and then there will never be a problem caused by doing something dumb?
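To make the second camp's point concrete, here is a minimal sketch (assuming gcc; clang behaves the same, and the file and bug are hypothetical): a file with a warning-level defect builds clean with no flags, but `-Wall -Wextra -Werror` refuses to build it, forcing the dumb thing to be fixed before it can ever become an incident.

```shell
# demo.c contains a classic latent bug: a variable that was meant to be
# used but was orphaned by a typo, so the logic silently ignores it.
cat > demo.c <<'EOF'
#include <stdio.h>

int main(void) {
    int count = 0;    /* meant to be used below, but a typo left it dead */
    int count2 = 1;
    printf("%d\n", count2);
    return 0;
}
EOF

# With no flags, the compiler accepts it without complaint:
gcc demo.c -o demo && echo "no flags: built clean"

# Treating warnings as errors surfaces the problem at build time:
if ! gcc -Wall -Wextra -Werror demo.c -o demo; then
    echo "-Wall -Werror: rejected (unused variable)"
fi
```

None of this proves which warning caused last week's segfault; the point is that the whole class of "dumb things" gets ratcheted out of the codebase.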
In reality, there is some merit to both positions... but I have seen lots of extremists, especially in the first camp, who essentially refuse to improve anything unless it has been proven to cause an incident, and never proactively, almost as a matter of principle. If you tell them you see something sub-optimal — here is the PR/patch with test coverage, etc., let's just improve it and take it out of the equation for future issues — they'd be like no, we must focus only on our most recent incident, and unless you can prove that the general improvement would have prevented a previous incident, it has no merit.
Much of it comes from a simplistic / naive / reductive understanding of a "root cause". For any complex system it's more like the swiss cheese model, with the holes lining up: if any one of several things (user traffic, other host load, race conditions, etc., etc.) hadn't been true, the incident wouldn't have happened, so it's not really useful to identify and fix a single "root" cause to the exclusion of all the other contributors.
Anyway... I am wondering:
* Is there an accepted metaphor / term for describing this tension? I guess it's basically a special kind of "can't see the forest for the trees" / "stuck in the weeds", anything closer?
* As someone pretty much in the second camp... any tips for working constructively with people in the first camp?
- Make them see the truth: this would mean you'd have to be ready to take some blame for their mistakes. Wouldn't recommend this if you aren't the manager.
- Get close with the manager / BECOME THE MANAGER so you can manage these people out
- Start looking for another job
Edit: These people seem to lack Associative Horizon and don't have the tinkerer / hacker spirit.
Profitable is the forest.
The rest, just trees.
Good luck.
That’s something I use personally to rationalize why people prefer super complex, error-prone approaches over approaches that are more durable. The reason almost universally comes down to familiarity rather than measurement, as qualified by a long list of selective biases.
The best way out of this, which always works for me, is to go lower: one or more steps lower in the technology stack than what you are currently using. That could mean eliminating a dependency, using a more primitive language, or merely rolling back a shallow layer of abstraction.
The reason this works is that forcing a more primitive approach requires eliminating some convenience. This isn’t a technology problem. It’s a people (discipline) problem, so you have to impose the means to require more disciplined participation. Yes, this will piss people off, because it’s a people problem, but it will solve the problem and result in a superior product.
It's really hard to find the root cause. Stakeholders want an approximate solution first.