It's common to hear people speak with certainty about any given software engineering practice, idiom, or concept. This isn't just people new to the field, enamored with a technology; in fact, it can be prominent among experienced folks. Yet if you pull on a thread or two, there's no verifiable support underlying the views. Good intuition? Yes. Scientific validity? Rarely.
I'm sure you can think of examples. Just look to the last few pull requests or design discussions you've observed.
My current working idea: most practices are subjective and often contrary to one another. Consequently, what matters more is choosing one of the broad system/software approaches and applying it consistently.
All rules and idioms are rules of thumb, useful for making quick decisions, and all of them have exceptions. Realizing that is part of becoming a senior developer.
If you're talking to someone who thinks you should always write tests, always stay DRY, or make everything OOP/functional, then you're talking to a junior dev.
It's like any list of rules (e.g. rules for slide decks) -- great for beginners, perhaps good for consistent team standards. However, once you understand why the rules exist, you also know when to break them.
So any practice/idiom that doesn’t carefully spell out the context in which it works should be ignored or at least treated with caution.
And any developer who claims that X is “always best practice” or “always the best solution” is basically self-identifying as inexperienced or arrogant, and should be treated with caution.
Unlike other fields of engineering (or medicine, or hairdressing, ...), software has no meaningful certification or regulation.
Circa 2005 you had "the Microsoft way or the highway" people who thought open source software was a joke. Today there are still people who think you're an idiot for having anything to do with Microsoft.
Frequently one hears bombastic comments about programming languages. For instance, "Java drools, Haskell rules".
Many people believe there is some practice that, applied dogmatically, gets great results. Reality rarely cooperates, and people resolve that contradiction in many ways; one of the most interesting is widespread "normalization of deviance".
It's not at all unusual to join a new team, be told that "we write tests for everything" and "we always do code reviews", and have the engineering manager describe a number of practices that are supposedly followed consistently. Then you look at the code: test coverage is maybe 5%, the practices that are allegedly in use are not in use, and you're left concluding that the code reviews can't really be happening either, or the team would be doing what it says it does.
The technologies being fought over are, considered in the context of the rest of the desktop personal computer industry, nearly identical.
But we had a mass-extinction event in the 1990s and most people under 50 haven't seen anything that is actually _different_.
So, for instance, people think that C is some sort of universal assembler and everything else is built on C. That is nowhere near the truth: it reflects the blinkers of the 21st-century PC world.