Now my general principle distinguishing "good" code from "bad" code is how well it models the problem being solved. Failure can come from unnecessary complexity (e.g. writing "if (a.toString() != "Infinity" && a.toString() != "NaN") { ... }" instead of "if (isFinite(a)) { ... }" when "a" is simply a number). And, conversely, from too little rigor, relying on blind faith (the famous XSS vulnerabilities and SQL injections in web apps are examples of that).
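A minimal sketch of the two styles, assuming "a" is a plain number (function names here are made up for illustration). Note the string-based version doesn't even get the same answer:

```javascript
// Clear: uses the built-in check directly.
function isUsableNumber(a) {
  return Number.isFinite(a);
}

// Needlessly complex: re-derives the same fact through strings,
// and even then misses "-Infinity".
function isUsableNumberConvoluted(a) {
  return a.toString() !== "Infinity" && a.toString() !== "NaN";
}

console.log(isUsableNumber(42));                  // true
console.log(isUsableNumber(Infinity));            // false
console.log(isUsableNumberConvoluted(-Infinity)); // true -- a lurking bug
```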
Code that relies a lot on pointless abstractions (aka. a bad "signal to noise ratio").
Code with lots of microdependencies to perform really trivial tasks.
Code with premature "optimizations" (usually done before actual performance measurements).
Code sacrificing readability (and thus maintainability) to be overly clever or to do trivial optimizations (usually with little to no impact on overall performance).
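A contrived sketch of the "trivial optimization at the cost of readability" trap (function names invented for illustration). The bitwise trick saves nothing measurable and silently breaks for large values:

```javascript
// Overly clever: manual reverse loop plus a bitwise truncation trick.
// "| 0" wraps at 32 bits, so the result is wrong for sums >= 2^31.
function averageClever(xs) {
  let s = 0, i = xs.length;
  while (i--) s += xs[i];
  return (s / xs.length) | 0;
}

// Readable: says what it means; any speedup from the trick above is noise.
function averageClear(xs) {
  const sum = xs.reduce((acc, x) => acc + x, 0);
  return Math.trunc(sum / xs.length);
}

console.log(averageClever([2 ** 31, 2 ** 31])); // -2147483648 -- wrong
console.log(averageClear([2 ** 31, 2 ** 31]));  // 2147483648
```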
And of course all the different ways to mess up comments, style and versioning (Yes, I consider the versioning part of the code):
* Uncommented code
* Over-Commented code
* Comment-Banners
* Commented-Out code (git exists for a reason)
* Inconsistent naming
* entirely_too_long_variable_names for things that exist for 20 LOC
* Unhelpful commit messages (my favorite: "fixed: some issues")
* Bulk-Commits that change/fix/introduce multiple features or issues at once
* Missing tags
* No semantic versioning
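Several of the comment sins above can fit in a few lines; a contrived before/after (all names invented):

```javascript
// Bad: banner, commented-out code left to rot, and a comment that
// just restates the line below it.
// ==================== UTILS ====================
// function oldClamp(x) { return x < 0 ? 0 : x; } // TODO delete?
function clampBad(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi); // clamp x between lo and hi
}

// Better: no banner, no dead code; the one comment earns its place
// by explaining *why*, not *what*.
function clamp(x, lo, hi) {
  // NaN compares false everywhere, so callers get NaN back unchanged.
  return Math.min(Math.max(x, lo), hi);
}
```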
If each class does one thing, anyone will eventually figure out how it’s doing it. If it’s doing multiple things, all bets are off. It’s also the exact type of code that will break something unrelated when changed.
Mind you, we don’t need classes like DictionaryConstructor and StringParameterParser, but you should - in about one sentence - be able to say what each class’ purview is.
I also see a lot of code where people effectively throw 30 responsibilities into the same class. It's really difficult to update. Strictly enforcing single responsibility solves a lot of problems.
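A toy sketch of the contrast (class names invented for illustration). The first class parses, validates, and persists all at once; the split version lets you describe each class's purview in one sentence:

```javascript
// Doing too much: parsing, validation, and persistence in one place.
class UserManagerBloated {
  constructor() { this.db = []; }
  createFromJson(json) {
    const data = JSON.parse(json);                                // parsing
    if (!data.email.includes("@")) throw new Error("bad email");  // validation
    this.db.push(data);                                           // persistence
    return data;
  }
}

// One sentence per class: UserParser parses JSON into an object,
// UserValidator checks it, UserStore keeps it.
class UserParser {
  parse(json) { return JSON.parse(json); }
}
class UserValidator {
  validate(user) {
    if (!user.email || !user.email.includes("@")) throw new Error("bad email");
    return user;
  }
}
class UserStore {
  constructor() { this.users = []; }
  add(user) { this.users.push(user); return user; }
}

// Usage: each step can now be changed (or tested) in isolation.
const store = new UserStore();
store.add(new UserValidator().validate(new UserParser().parse('{"email":"a@b.c"}')));
```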
As a great (but terrible) example: a project at my first job was a real-time system, and the dev for it (I was in testing back then) had decided that since he only had 100 ms slots per task but could use up to 10 tasks (or whatever the limit was), he'd distribute some logic across all 10 tasks rather than making one task entirely responsible for it. The system could have had (and eventually ended up having) a reasonable dataflow with a DAG structure relating the tasks (input task, fan-out to processing tasks, fan-in to output task), but instead it had a much more complicated internal structure than necessary. It was incomprehensible because of its incoherence. Ultimately, some of the processing tasks were two deep or had another layer of fan-out/fan-in, but they should have been built that way to begin with. One of the worst bits is that it forced an ordering on the execution of the processing tasks that was not, strictly, necessary except for this distributed logic. Imagine a flow more like this:
    In -> Task A -> Out
            |
            v
         Task B
            |
            v
         Task C

Versus this:

    In -> Task A -> Out
       -> Task B ->
       -> Task C ->
       -> Task D ->

(With some of those tasks being composed of multiple tasks, but in a more coherent way now.)
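The coherent second shape can be sketched with promises (task names and the trivial work inside them are invented for illustration); the point is that the processing tasks impose no ordering on each other, only the DAG does:

```javascript
// Fan-out / fan-in: one input task, independent processing tasks,
// one output task.
async function inputTask() {
  return [1, 2, 3, 4];
}

const processingTasks = [
  async xs => xs.map(x => x * 2),              // Task B
  async xs => xs.filter(x => x % 2 === 0),     // Task C
  async xs => [xs.reduce((a, b) => a + b, 0)], // Task D
];

async function outputTask(results) {
  return results.flat();
}

async function pipeline() {
  const input = await inputTask();
  // The tasks run concurrently; the DAG, not the scheduler, defines order.
  const results = await Promise.all(processingTasks.map(t => t(input)));
  return outputTask(results);
}
```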
* insecure
* doesn't work
* lack of checks and validation leading to unhandled states
* inconsistent style and method
* difficult to understand for the wrong reasons
* spaghetti with too much coupling
* poorly optimized and inefficient
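A small sketch of the "lack of checks leading to unhandled states" point (function names invented for illustration): the unchecked version silently accepts garbage, the checked one rejects every invalid state explicitly:

```javascript
// Unhandled states: trusts that the input parses and is in range.
// parseInt("8080abc") happily returns 8080.
function parsePortUnchecked(s) {
  return parseInt(s, 10);
}

// Checked: non-numbers, fractions, and out-of-range values are all
// rejected instead of flowing onward as nonsense.
function parsePort(s) {
  const n = Number(s);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    throw new RangeError(`invalid port: ${JSON.stringify(s)}`);
  }
  return n;
}
```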
Bad code to me is code that is difficult to modify or extend. "Difficult" to me implies a higher likelihood of regressions, or more time spent making changes.
Probably the worst thing of all is the "wrong abstraction": concepts that have been shoehorned into other abstractions, bloating method signatures and variables and obfuscating the intent of the names. This issue becomes a cancer that causes a ton of other problems.
Great code reads close to the English language. The bottom of the call stack uses high-level, abstract language, and the top of the call stack is highly specific, with each module on the stack maintaining a consistent degree of abstraction.
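A tiny sketch of that layering (all names invented): the top-level function reads like a sentence, and each helper below it is one level more concrete:

```javascript
// High-level: reads like English, no string fiddling in sight.
function publishArticle(draft) {
  const article = formatDraft(draft);
  validateArticle(article);
  return renderArticle(article);
}

// One level down: each helper is specific, but internally consistent.
function formatDraft(draft) {
  return { title: draft.title.trim(), body: draft.body.trim() };
}

function validateArticle(article) {
  if (!article.title) throw new Error("article needs a title");
}

function renderArticle(article) {
  return `# ${article.title}\n\n${article.body}`;
}
```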
I didn't believe in this cargo cult from the beginning. Instead I'm watching the world change. When you work with code and understand what it does and all of its implications, these rules are irrelevant, because you have your own sense of when it's time to (…).
To me bad code is basically two things: (1) negligent/inconsistent formatting, (2) when you read it you can’t grasp what it does. Not because of complex algorithms, but because it is an abomination consisting of tens of files, 5 meaningful lines each, and logic mostly smeared across imports.
My latest favorite was an abstract factory for a single server, constructed from specific implementations of property accessors which combine into a hierarchical, undocumented method structure. All it does is listen on a TCP port and regex-parse messages like
command; key:value; key:value; …; msg:…
replying "command OK" and calling back .on("command", (data_structure_class) => {}). Too bad it parses the message wrongly when msg contains ";", which is allowed according to the spec. 17 files and countless boilerplate lines, only to fail at a trivial task that is basically a one-pager.
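A one-pager sketch of such a parser, assuming (per the spec quoted above) that msg is the last field: everything after "msg:" is taken verbatim, so a ";" inside the message survives, which is exactly the detail the 17-file version got wrong. The function name is made up:

```javascript
// Parses "command; key:value; key:value; ...; msg:..." lines.
// Assumption: "msg:" is the final field, so the rest of the line after it
// belongs to the message, semicolons included.
function parseMessage(line) {
  const msgIdx = line.indexOf("msg:");
  const head = msgIdx === -1 ? line : line.slice(0, msgIdx);
  const parts = head.split(";").map(p => p.trim()).filter(Boolean);
  const command = parts.shift();
  const fields = {};
  for (const part of parts) {
    const colon = part.indexOf(":");
    fields[part.slice(0, colon)] = part.slice(colon + 1);
  }
  if (msgIdx !== -1) fields.msg = line.slice(msgIdx + 4);
  return { command, fields };
}

console.log(parseMessage("SET; key:value; msg:hello; world"));
// { command: "SET", fields: { key: "value", msg: "hello; world" } }
```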
If not, there's something wrong... if you can't divine the author's intention at a glance, you and anyone else looking at the code will find it hard to spot bugs that deviate from that intention.
A subset of this is that the code should look like the language was written for it. Unsuitable use of design patterns is probably the biggest violation. And often you need to put a lot of thought into understanding both the language and the business domain.
Now, this does include the needs of the developers working on it throughout its lifetime, but what makes code 'good' for a bank's system vs. a SaaS startup vs. a script somebody writes for themselves and themselves alone can be completely different.
Trying to set out concrete rules for what makes something good practice is difficult when it's something as varied as coding.
2. Incorrect, including issues such as being insecure or not fulfilling the specification.