We've likened it to having someone review a paper you've written: you often read what you think you wrote, not what's actually written.
This got me questioning what others have found to be transformative in their development practices.
My second most important change was to learn contract-based programming or, if the language has poor support, at least to use a ton of asserts. For me, this stabilises the code very quickly and, again, improved my bug-per-line ratio. It forces me to encode what I expect, and the code will relentlessly and instantly tell me when my assumptions are wrong, so I can go back and fix that before basing the next steps on broken assumptions.
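In a language without contract support, plain asserts get you most of the way. A minimal sketch in Python, with an invented toy allocator - the asserts encode the assumptions so violations surface instantly:

  def first_fit(slots: list[int], size: int) -> int:
      """Return the index of the first slot with at least `size` capacity."""
      assert size > 0, "size must be positive"                 # precondition
      assert all(s >= 0 for s in slots), "corrupt slot table"  # precondition
      for i, free in enumerate(slots):
          if free >= size:
              slots[i] -= size
              assert slots[i] >= 0, "over-allocated a slot"    # postcondition
              return i
      raise MemoryError("no slot large enough")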
Even for legacy projects, starting with adding regression tests for every bug you find is a great way to introduce testing. And when you add new features, you can write tests for that as well.
I find that an FP mindset is helpful mostly because it tends to reduce the amount of global or spread-around state. This in turn makes testing and quickly iterating in a REPL a lot easier. Also, when the time comes to debug later, it's much simpler if you don't have a huge amount of state to set up first.
And even if a lot of state is required, having it be an explicit input to a procedure is helpful because it makes it much clearer what you need to set up when doing manual testing.
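A minimal sketch of what that looks like, with an invented cart structure - state is an explicit input and output, so setting it up in a test or a REPL session is trivial:

  def add_item(cart: dict, sku: str, qty: int) -> dict:
      items = dict(cart.get("items", {}))
      items[sku] = items.get(sku, 0) + qty
      return {**cart, "items": items}  # new state; the input is untouched

  # Manual testing: build exactly the state you need, inline.
  cart = add_item({"items": {}}, "sku-42", 2)
  assert cart["items"]["sku-42"] == 2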
Treat code mess with the same techniques you would treat real-life mess. Sometimes you sweep it under the rug. You can toss it in a closet or attic. You can buy a shelf or a box and toss everything in there.
You can clean it up seasonally, like set aside a sprint for it.
Some kinds of messes are hazardous and absolutely should not be tolerated - this is similar to leaving milk out or letting trash pile up. Many people make the mistake of assuming that because some messes are very dangerous, all of them are.
Most messes will slow you down, but would cost even more to clean up. Overall, I've seen documentation do more harm than good - it's often faster to interrupt a colleague to ask about something than for that colleague to spend weeks writing and updating documentation for something that gets thrown away.
Some will take more time to fix later than now - this is what people mean by tech debt. But it's less common than it seems.
With a lot of mess piles, the important parts float to the top and the less important ones sink to the bottom. God classes are mess piles.
Once the mess becomes a burden, it's a good time to clean up.
You may need a few janitors and landscapers, especially for a large code base.
Some people place much higher priority on cleanliness than others. Be respectful of them but you don't have to be like them.
In short, learning new programming paradigms completely changed my view of programming and these skills can translate over to more "mainstream" languages, so it is still a worthwhile effort.
E.g.:

  new(state, &args) => mutation
  apply(state, mutation) => state'

Applied to finance:

  newPayment(balanceSheet, amount, date) => payment
  apply(balanceSheet, payment) => balanceSheet'
  newDiscount(balanceSheet', amount, date) => discount
  apply(balanceSheet', discount) => balanceSheet''

Applied to chess:

  newMovement(board, K, E2) => movement
  apply(board, movement) => board'
  newMovement(board', b, D6) => movement'
  apply(board', movement') => board''

As a bonus, it's clear where to enforce invariants and fail early:

  newMovement(board', K, E3) => Illegal Movement (can't move to E3)
  newMovement(board'', K, E2) => Illegal Movement (White King not available on board)
  apply(board'', randomMovement) => Illegal Play
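A minimal Python sketch of the pattern, with an invented toy balance as the state - the new* constructor validates and produces a mutation object, and apply is pure:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Payment:
      amount: int  # cents

  def new_payment(balance: int, amount: int) -> Payment:
      # Invariants are enforced here, before a mutation object ever exists.
      if amount <= 0:
          raise ValueError("illegal payment: amount must be positive")
      if amount > balance:
          raise ValueError("illegal payment: insufficient balance")
      return Payment(amount)

  def apply_mutation(balance: int, payment: Payment) -> int:
      # Pure: returns the next state, never modifies the previous one.
      return balance - payment.amount

  balance = 1000
  payment = new_payment(balance, 250)
  balance2 = apply_mutation(balance, payment)  # 750; balance is untouched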
I'm working on extending auto reloading to all of the assets in the project because I know that tight feedback loops are that important.
REPL means Read-Eval-Print-Loop, and it's a place where you can run code immediately and get immediate feedback. For example, press F12 in your browser and use the console to evaluate 1+1. But you can also use that console to interact with your code in the browser and the DOM, and to make HTTP requests.
But I also see REPL as a principle - to get feedback on the code you write as quickly as possible - and things that help this are:
* Quick compile times
* Unit tests
* Continuous integration
So that each stage is as quick as possible. I write code and within a second or two know if it failed a test. Within a few seconds I can play with the product locally to test it manually. Once committed, I can see it in production, or at least a staging environment, pretty quickly.
You can then have bigger REPL loops where a product manager can see your initial code fairly quickly, give you feedback, and you can get started on that right away and get it out again for review quickly.
I don't think there is any excuse not to work like this given the explosion of tooling in the last 20 years to help.
2. YAGNI
Writing overly elaborate code because it is fun! That's fun at first, but you soon learn it's better to write what is needed now. There is a balance - writing absolutely shit code is not an excuse - but adding overly generic code for stuff that might happen is also a problem.
When I'm able to greenfield something myself, and use this from day one, I tend to very naturally end up with the "well-separated monolith" that some people are starting to talk about. I have on multiple occasions realized I need to pull some fairly significant chunk of my project out so it can run on its own server for some reason, and it's been a less-than-one-day project each time. It's not because I'm uniquely awesome, it's because keeping everything testable means it has to be code where I've already clearly broken out what it needs to function, and how to have it function in isolation, so when it actually has to function in real isolation, it's very clear what needs to be done and how.
Of all the changes I've made to my coding practice over my 20+ years, I think that's the biggest one. It crosses languages. At times I've written my own unit test framework when I'm in some obscure place that doesn't already have one. It crosses environments. It crosses frontend, backend, command line, batch, and real-time pipeline processing. You need to practice writing testable code, and the best way to do it is to just start doing it on anything you greenfield.
My standard "start a new greenfield project" plan now is "create git repo, set up a 'hello world' executable, install test suite and add pre-commit hook to ensure it passes". Usually I add whatever static analysis is available, too, and put it into the pre-commit hook right away. If you do it as you go along, it's cheap, and it more than pays for itself. If you try to retrofit it later.... yeowch.
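For what it's worth, the hook itself can be tiny. A sketch in Python, saved as .git/hooks/pre-commit and made executable - the pytest/ruff commands are assumptions, substitute whatever test runner and static analysis your project uses:

  #!/usr/bin/env python3
  import subprocess
  import sys

  # Run each check; abort the commit on the first failure.
  for cmd in (["pytest", "-q"], ["ruff", "check", "."]):
      if subprocess.run(cmd).returncode != 0:
          print(f"pre-commit: {' '.join(cmd)} failed; commit aborted")
          sys.exit(1)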
The concepts that influenced me the most were immutability and the power of mapping+filtering. Whenever I read a for/while loop now, I’ll attempt to convert it to the FP equivalent in my head so that I can understand it better.
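A small sketch of that mental translation, with invented data:

  orders = [{"total": 120, "paid": True}, {"total": 80, "paid": False}]

  # Loop version:
  totals = []
  for order in orders:
      if order["paid"]:
          totals.append(order["total"])

  # FP equivalent - the filter-then-map intent is explicit:
  totals_fp = [order["total"] for order in orders if order["paid"]]
  assert totals == totals_fp == [120]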
SQL, Python, PHP, JavaScript, and many aspects of Unix and the C/Make toolchain all come to mind.
Mind you, I don't like all of the above. At least 2 in particular I definitely wouldn't mind seeing "just go away."
But I do recognize that they have special staying power. And they didn't get this power by accident.
For many (most) systems, there are designs that make perfect sense, but will be hard to debug.
When I started designing so that programs could tell me what they were going to do, what they were doing, and why - and so that their state, wherever possible, could be expressed in a structure where I could "see" the wrongness - my development time went way down, because so much of it was really time spent debugging. So my productivity went way up.
My younger self did not truly understand the strength of modeling or treating data as the reification of a process. I saw everything through the lens of what I knew as a programmer which was, for the early 2000s java developer I was, a collection of accessors, mutators and some memorized pablum about object orientation. I treated the database with contempt (big mistake) and sought to solve all problems through libraries.
Now I can see the relational theory in unix pipes. I can see the call-backs of AWK and XSLT processing a stream of lines/tuples or tree nodes as the player-piano to the punch cards. I understand that applications come and go, but data is forever. I no longer sweat the small stuff and finally feel less the imposter.
Parse, don't validate https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
All libs/products should be pure functions with documented inputs and outputs, making them predictable.
Use in-app event sourcing to reduce the need for global state.
Data-oriented design (DoD): https://youtu.be/yy8jQgmhbAU - even in non-bare-metal languages, this is useful for readability.
For errors, return instead of throw
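On that last point, a sketch of the return-instead-of-throw style in Python (the parse_amount helper is invented) - the error becomes part of the return value, so callers can't ignore the failure path by accident:

  def parse_amount(text: str) -> tuple[int | None, str | None]:
      """Return (value, None) on success or (None, error message) on failure."""
      try:
          return int(text), None
      except ValueError:
          return None, f"not a number: {text!r}"

  value, err = parse_amount("42x")
  if err is not None:
      print(err)  # handled here, not in a distant catch block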
* Automated regression testing and build/deploy pipeline. The machine will do things quickly and repeatedly exactly the same way, given the same input conditions/data.
* TDD. Create tests based on requirements, get one thing working at a time, and refactor with a safety net.
* Mixing FP style and OO style programming to get the best of both worlds.
* Understanding type systems, and how to use types to catch/prevent errors and create meaningful abstractions.
* Good code organization, both in physical on-disk structure and in terms of coupling & cohesion.
* Validate all incoming data and fail fast with good error messaging.
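A sketch of that last bullet, with an invented signup payload - validate at the boundary and fail fast with a message that names the problem:

  def validate_signup(payload: dict) -> dict:
      for field in ("email", "password"):
          if field not in payload:
              raise ValueError(f"missing required field: {field}")
      if "@" not in payload["email"]:
          raise ValueError(f"invalid email: {payload['email']!r}")
      return payload  # downstream code can assume a well-formed payload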
Make your code easy to read at the expense of easy to write. Don't abstract unless you know exactly the use case. If the use case is not immediately forthcoming, YAGNI (you ain't gonna need it). Related: don't be clever. Clever is cute and all, but it does not belong in production code.
Any line, function, workflow, etc. can fail. You need to have worked through all the failure cases and know how you are going to handle them.
Some things don't need to scale, ever. But most things I work on do. If I don't know the story on how I can ramp an implementation up a few orders of magnitude, then I can't say I've designed it well.
Aside from that, measure everything. Metrics, telemetry, structured logging. Design for failure and design for understanding what happened to cause the failure. Data will tell you what you are doing, not what you think you are doing.
There was a post here yesterday about a neat map thing. I hit a bug, and the author could only rely on their experience that it was potentially slow at times, saying it should work. If there were proper metrics in place, they would know that N% of calls exceed M seconds and cause a timeout. They could then relate this to the underlying API and its performance as the service experiences it. With proper logging, they'd see what the cache hit ratio is and determine whether new cache entries should be added.
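A sketch of the kind of instrumentation that would answer those questions (names and fields invented) - structured logs with timings, so "slow at times" becomes "N% of calls exceed M seconds":

  import json
  import logging
  import time

  logging.basicConfig(level=logging.INFO, format="%(message)s")
  log = logging.getLogger("maps")

  def fetch_tile(x: int, y: int) -> bytes:
      start = time.monotonic()
      data = b"..."  # stand-in for the real upstream API call
      log.info(json.dumps({
          "event": "fetch_tile",
          "x": x, "y": y,
          "duration_ms": round((time.monotonic() - start) * 1000),
          "cache_hit": False,
      }))
      return data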
Build, measure, learn.
Oh, and automated tests. So much to be written on that.
I liked this article I recently came across discussing the topic: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
I'm building internally used software, and getting feedback from stakeholders in the testing environment was hard work, and often impossible.
We went from roughly monthly releases to deploy-when-it's-ready, and this tightened our feedback loops immensely.
Also, when something breaks due to a deployment, it's usually just one or two things that could be responsible, instead of 20+ that went into one monthly release. Waaaay easier to debug with small increments.
Basically it involved writing additional code to set up the app state to the state I need.
It went from changing a line of code, running the app, clicking/filling fields as needed, and then seeing the change reflected, to the app immediately being in the state that I wanted.
Now that I'm working in web-apps, hot-reloading is quite beautiful.
At the team level, probably CI/CD. It forces us to break the monolith into digestible chunks and makes regression testing easier.
Also recently did Thorsten Ball's Interpreter/Compiler books which focus heavily on unit testing the functionality (I can't recommend these books enough).
I now can't imagine going back to a pre-TDD world.
Why? Because I’ve tried to actually think about my underlying data structure instead of defaulting to convention or reflex.
The best example? Ch. 13 of Marcel Weiher's "iOS and macOS Performance Tuning", which explains how to improve lookups in a public transport app by 1000x.
What's more? The description of the data (the code) and the application (the intended use) are probably way better, because thinking about performance is similar to thinking about your data. You want answers fast. Fast answers, fast code.
Also doesn't hurt that it improves Intellisense/Autocomplete, gives better hints/help as you're calling a method, and improves on "self documenting" code.
Distributed version control is one of the greatest things to have gained popularity in the last ~20 years; if you’re not incorporating git, hg, fossil, ... start now.
You gain: concise annotated commits, ease of cloning/transporting, full history, easy branching for feature branches or other logically separate work, tools like bisecting, blame/annotation, etc., etc.
1. Not being afraid to look at the code of the libraries that my main project depends on. It's a slow, deliberate process to develop this habit and skill. But more importantly, as you keep doing this, you will develop your own tactics of understanding a library's code and design, in a short amount of time.
2. Not worrying about deadlines all the time. Not a programming technique as such, but in a world of standups and agile, sometimes, you tend to work for the standup status. Avoiding that has been a big win.
3. (Something new I've been trying) Practicing fundamentals. I know the popular opinion is to find a project that you can learn a lot from, but that may not always happen. Good athletes work on their fundamentals all the time - Steph Curry shoots more than 100 three-point shots every day. I'm trying to use that as inspiration to find some time every week to work on fundamentals.
4. Writing: essays, notes. In general, I've noticed I gain more clarity and confidence when I spend some time writing about a subject. Over time, I've noticed, I've become more efficient in the process.
Basic inline documentation. Who wrote it, when and why. What else did you consider. How does it differ from other solutions. Why is it designed the way it is. Brief history of design changes. What state did you reach in testing. What are the forward-looking goals. Takes 5 minutes, pays for itself many times over in the future. 90% of effort is maintenance.
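A sketch of such a header, with every detail invented for illustration:

  # Author: jdoe, 2019-08-12.
  # Why: the batch import timed out on >50k rows; this streams rows instead.
  # Considered: chunked REST calls (too chatty), COPY (no row-level errors).
  # Design history: v1 buffered everything in memory; rewritten after an OOM.
  # Testing: verified against the July production dump.
  # Forward-looking: parallelize per table once lock contention is resolved.
  def stream_import(rows):
      ...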
I think literally every logger lets you log at a certain point as a single line that does not cover the context of the log.
  logger.info('Doing this and that', function () {
    do_this()
    if (maybe) do_that()
  })
This way you know how much the log comment is supposed to cover, and you can take a benchmark on the said context. I can never think to just add a single-line logger anymore.
This is much better than inserting comments in code, as comments have no context except the line below.
You can add a function like logger.comment that doesn't log anything, as a comment-syntax replacement.
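A sketch of the same idea in Python (the API is invented), using a context manager so the scope of the message is explicit and the benchmark comes for free:

  import logging
  import time
  from contextlib import contextmanager

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger(__name__)

  @contextmanager
  def log_scope(message: str):
      # Logs entry and exit of the covered block, with elapsed time.
      start = time.monotonic()
      log.info("start: %s", message)
      try:
          yield
      finally:
          log.info("end: %s (%.3fs)", message, time.monotonic() - start)

  def do_this(): pass   # stand-ins for the example above
  def do_that(): pass
  maybe = True

  with log_scope("Doing this and that"):
      do_this()
      if maybe:
          do_that()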
Thankfully that was decades ago, which means I enjoyed the magic and bliss for most of my professional career.
Also interesting: I started using state machines in hardware designs before I applied them in software.
They're a super cheap way of
1. allowing feature flags
2. injecting credentials in a way the user thinks about exactly once
3. moving workstation-specific details out of your code repository
They're built into the core of almost every language in existence (especially shell scripts) and you're probably already using them without knowing. They're (get this) _variables_ for tuning to your _environment_.
Sounds like I'm being sarcastic here (eh, maybe a bit) but it never really hit me until I really dug into the concept.
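A sketch of all three uses in Python (variable names invented):

  import os

  # 1. A feature flag:
  DARK_MODE = os.environ.get("MYAPP_DARK_MODE", "0") == "1"

  # 2. Credentials - set once per environment, never committed:
  API_KEY = os.environ.get("MYAPP_API_KEY", "")

  # 3. Workstation-specific details kept out of the repository:
  DATA_DIR = os.environ.get("MYAPP_DATA_DIR", "/tmp/myapp")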
Other development practices that boosted my output significantly: Regular cardio exercise like running, strength training by lifting weights, and regularly reading source code for pleasure.
https://github.com/nikitavoloboev/dotfiles/blob/master/zsh/a...
For example, once we had a project that wanted to add Role-Based Access Control, and some junior-ish engineer suggested adding boolean columns to the table for each of the user's roles. Nope. Instead we created a document store with roles, and defining new roles was as simple as adding a const to a set in the service's code. Good thing too, because the number of roles grew from the initially requested ~3 to almost 100. That would have been too wide a table.
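A sketch of the roles-as-constants idea (role names invented) - adding a role is one line in a set, not a schema migration:

  ROLES = {"admin", "auditor", "billing.read", "billing.write"}

  def grant(user_doc: dict, role: str) -> dict:
      if role not in ROLES:
          raise ValueError(f"unknown role: {role}")
      roles = sorted(set(user_doc.get("roles", [])) | {role})
      return {**user_doc, "roles": roles}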
2. Don't get into the trap of refactoring just because you read about a cool new way of doing it, unless it reduces the code size by half or improves performance multi-fold. Otherwise you waste time you could have spent writing or planning a new feature.
3. Write very big, elaborate comments. They're for future you: you can't remember everything, and sometimes you need to know why you wrote that code or condition when the reasoning is long forgotten.
Try again.
But with a better architecture and nearly zero mistakes, through using OO correctly (please read the word correctly).
One step at a time.
Ps. This is mostly for when a part of the code-base is found to be sensitive to bugs. E.g., from today: handling payment providers and payments better.
I thought it was important enough to make the change bigger, with the intention of removing recurring/similar errors in someone else's code-base.
Afterwards, go to him and explain why. Developers can be proud of their code even when it's shitty code (e.g. long functions, loops, if/else, switch, by-reference, ... are mostly examples of bad code), so never say it's shitty.
- Reading books (e.g. Clean Code, Refactoring: Ruby Edition, Practical Object-Oriented Design, ... these left a mark)
- Going to conferences (Rubyist? Attend a talk by Sandi Metz if you have a chance)
- Testing (TDD at least Unit Testing)
- Yeah, the usual stuff: quick deployments, pull requests, CI/CD ...
- Preferring Composition over Inheritance (in general and except exceptions :) - see the sketch after this list
- Keeping on asking myself: what is the one responsibility of this class/object/component?
- Spending a bit more time on naming things
- Have a side-project! It's a fun way of learning new things (my current one is https://www.ahoymaps.com/ - I could reuse many things that I learnt also in my 9-5)
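And on composition over inheritance, a minimal Python sketch (the classes are invented) - behavior is swapped by passing a different collaborator, not by rearranging a class hierarchy:

  class Engine:
      def start(self) -> str:
          return "vroom"

  class ElectricEngine:
      def start(self) -> str:
          return "hum"

  class Car:
      def __init__(self, engine):
          self.engine = engine  # has-a, not is-a

      def drive(self) -> str:
          return self.engine.start()

  print(Car(ElectricEngine()).drive())  # "hum" - no Car subclass needed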
In short, I have been doing Clojure since last year.
What would end up happening is that sometimes, later in the day, I would decide to revert or modify the change, so my commit history ended up flooded with a bunch of small commits.
I ended up writing a script that checks my current work, and if there are enough changes or enough time has passed, a commit is recommended:
https://github.com/stvpn/sunday/blob/master/git-train/git-bo...
I'm pretty much adding fuzz-testing to all my current/new projects now. It doesn't make sense in all contexts, but anything that involves reading/parsing/filtering is ideal - you throw enough random input at your script/package/library and you're almost certain to find issues.
For example I wrote a BASIC interpreter, a couple of simple virtual-machines, and similar scripting-tools recently. Throwing random input at them found crashes almost immediately.
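A sketch of dumb random-input fuzzing (parse is a stand-in for whatever reading/parsing entry point you have) - expected failures pass silently, anything else is a bug:

  import random
  import string

  def parse(src: str):
      ...  # stand-in for the real interpreter/parser

  for _ in range(100_000):
      src = "".join(random.choices(string.printable, k=random.randint(0, 200)))
      try:
          parse(src)
      except SyntaxError:
          pass  # invalid input failing cleanly is fine
      except Exception as exc:
          print(f"crash on {src!r}: {exc}")  # unexpected: a real bug
          raise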
I'd not always been the best learner, and this taught me how to learn more effectively, while at the same time accepting the limitations of the brain.
It led me to find the Anki app and, combined with more effective learning, it positively affected my ability to be a better programmer.
1) Start with consistent naming conventions, notation and structures
2) Rely on autocomplete to simplify typing/readability/programming effort. It’s a tremendous force multiplier.
Ninja’d
3) TDD
4) Functional programming
We enforce the architecture as much as possible through protocol conformance, which doesn’t always work, but works extraordinarily well for “Views” or “Scenes”.
It’s taken so much stress out of the development lifecycle for our team.
A broader one: writing expressions as much as possible. Basically, it means avoiding unnecessary mutations (and jumps).
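A tiny sketch of the difference (names invented):

  count = 0

  # Statement style - a jump plus an assignment in each branch:
  if count == 0:
      label = "empty"
  else:
      label = "non-empty"

  # Expression style - a single binding, no control-flow statements:
  label = "empty" if count == 0 else "non-empty"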
Then, avoiding architecture: thinking in algorithms that process data (instead of "systems") has been transformative.
2. If you can, step through any new code with the debugger. You may find the execution path isn't what you thought it was.
Greatly improved readability, even though there was a short adjustment period.
2. Figuring out unit and integration tests
3. Embracing the clean code paradigm and SOLID principles
  SELECT this
       , that
       , something
  FROM table
takes a while to get used to