What are the best practices that one should follow?
Once I'm convinced I have tests for most code paths (unit tests if I'm being fussy, automated integration tests if I'm being lazy), I mentally go through everything impure and think "What if this operation fails?", at a fine-grained level.
Sometimes I even put garbage code in, to simulate the failure.
If I do 3 network requests, I want to be sure that it won't crash if the internet goes down between the second and third, or if the drive runs out of space between writing the first and second file.
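To make that concrete, here's roughly how I'd simulate the failure instead of actually pulling the network cable. Everything here (mirror_pages, the URLs, what should happen to the partial files) is made up for illustration; the point is the pattern of mocking the impure call so the Nth one raises.

```python
import pytest
from unittest import mock

import requests

# Hypothetical function under test: fetches three URLs and writes each
# response body to disk, returning the paths written so far.
def mirror_pages(urls, out_dir):
    written = []
    for i, url in enumerate(urls):
        body = requests.get(url, timeout=10).text
        path = f"{out_dir}/page_{i}.html"
        with open(path, "w") as f:
            f.write(body)
        written.append(path)
    return written

def test_network_dies_between_second_and_third_request(tmp_path):
    ok = mock.Mock(text="<html>ok</html>")
    # First two requests succeed, the third raises as if the link dropped.
    with mock.patch("requests.get",
                    side_effect=[ok, ok, requests.ConnectionError("link down")]):
        with pytest.raises(requests.ConnectionError):
            mirror_pages(["u1", "u2", "u3"], str(tmp_path))
    # Whatever the desired behavior is (partial files cleaned up, or kept),
    # assert it here instead of discovering it in production.
```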
And of course, since everything I'm actually interested in working on is large and complex, I rely pretty heavily on reuse, which means my first line of defense is what other people say about stuff. If Google and Amazon trust it, I might too; if it's got 4 stars on GitHub, I'll probably stay away unless there's no alternative and it's small enough to debug myself without unreasonable effort.
I also like to have the computer help me with the reasoning as much as possible. My likes and dislikes as far as languages go are heavily influenced by how well the linters work. If something is too freeform and allows lots of stuff the linter doesn't understand, I'll pass on it.
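A small illustration of what I mean by letting the tooling do some of the reasoning, using Python type hints and a checker like mypy as the example (any language with decent static tooling works the same way; the function names are made up):

```python
from typing import Optional

# With annotations, a checker such as mypy flags the bug below before any
# test runs; the equivalent untyped code would sail straight through.
def find_user(user_id: int) -> Optional[str]:
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

def greet(user_id: int) -> str:
    name = find_user(user_id)
    return "Hello, " + name.upper()  # mypy: "name" may be None here
```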
I'm a big believer in the law of requisite variety
http://pespmc1.vub.ac.be/REQVAR.html
which, practically, means you need to bring different tools to bear depending on the nature of the problem.
At some phases of some projects, almost all the uncertainty is around the behavior of the framework you're working inside, and you can unit test until you're blue in the face and it won't do you any good.
Some code is straightforward and barely needs tests; other code (say, string parsing) involves well-defined functions with inputs and outputs that are tricky to implement, and starting with test cases is really the way to go. (Get an IDE with a good visual debugger if you don't already have one; tests together with the debugger are awesome.)
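For that string-parsing kind of code, a test-first sketch might look like this (parse_duration is my own toy example, not anything from a particular project; the test cases existed in my head before the body did):

```python
import pytest

# Small, pure, well-defined inputs and outputs, easy to get subtly wrong:
# exactly the kind of function worth starting from test cases.
def parse_duration(text: str) -> int:
    """Parse strings like '1h30m', '45s', '2h' into a number of seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    total, number = 0, ""
    for ch in text.strip():
        if ch.isdigit():
            number += ch
        elif ch in units and number:
            total += int(number) * units[ch]
            number = ""
        else:
            raise ValueError(f"bad duration: {text!r}")
    if number:
        raise ValueError(f"trailing number without a unit: {text!r}")
    return total

@pytest.mark.parametrize("text,expected", [
    ("45s", 45),
    ("2h", 7200),
    ("1h30m", 5400),
])
def test_parse_duration(text, expected):
    assert parse_duration(text) == expected

def test_parse_duration_rejects_garbage():
    with pytest.raises(ValueError):
        parse_duration("h30")
```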
If an algorithm is tricky at all I will look it up in an algorithm book. I definitely have made formal proofs that an algorithm worked when I wasn't sure.
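When I do write such a proof, it's rarely anything fancier than a loop invariant plus induction. Here's the skeleton for something as trivial as a running-maximum loop (my own example, purely to show the shape):

```latex
% Invariant-style correctness sketch for a loop computing m = max(a_1,...,a_n).
\textbf{Invariant.} After the $k$-th iteration, $m = \max\{a_1,\dots,a_k\}$.

\textbf{Base case.} Before the loop, $m = a_1$, so the invariant holds for $k=1$.

\textbf{Inductive step.} Assume $m = \max\{a_1,\dots,a_k\}$. The loop body sets
$m \leftarrow \max(m, a_{k+1})$, hence $m = \max\{a_1,\dots,a_{k+1}\}$.

\textbf{Termination.} The loop runs exactly $n-1$ times, so on exit
$m = \max\{a_1,\dots,a_n\}$, which is the claim.
```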
This book, I think, makes the best case for unit tests I've ever seen
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
and particularly emphasizes that unit tests need to be really fast because you're going to run them hundreds or thousands of times.
However, there are testing needs that don't fit that model, and you have to fit them into your process somehow. I once wrote a multithreaded "service" and also a "super hammer" test that spawned 1000 threads and tried to provoke a race condition for 40 seconds, which is way too long to be part of your ordinary build process.

Similarly, you can spend anywhere from 2 minutes to 2 months training a neural network, and you're never 100% sure that the model you built is good (able to be put in front of customers) unless you test it; the pros frequently build several models and pick the best. That's not something you can afford to do every time you "mvn install", so you have to develop a process that addresses the real problems in front of you and doesn't get bogged down trying things inappropriate for your problem.
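To make that split concrete, here's a scaled-down sketch in Python/pytest terms (the original service was on a Java/Maven build, so this is just the shape of the idea, not the actual code): tag the slow hammer test with a custom marker and exclude that marker from the ordinary run.

```python
import threading

import pytest

# Toy stand-in for the multithreaded service: a counter that is only correct
# if its lock is actually used. The 1000-thread / 40-second figures above are
# from the real test; this scaled-down version is purely illustrative.
class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:
            self.value += 1

def _hammer(counter, n):
    for _ in range(n):
        counter.increment()

# Run the ordinary suite with `pytest -m "not hammer"` and run this one
# separately when you actually want to hunt for races. ("hammer" should be
# declared under [pytest] markers in pytest.ini to avoid the unknown-marker
# warning.)
@pytest.mark.hammer
def test_hammer_counter_for_races():
    counter = Counter()
    per_thread = 10_000
    threads = [threading.Thread(target=_hammer, args=(counter, per_thread))
               for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter.value == 100 * per_thread
```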
Lately (for side projects) I haven't even been adding a bunch of feature ideas, because then I lose motivation to work on them.
https://news.ycombinator.com/item?id=35640137
https://www.quantamagazine.org/the-number-15-describes-the-s...