https://www.linkedin.com/posts/pete-heard-lr_git-activity-7129496167645618176-Viyp
I am curious if anyone else has any thoughts regarding this matter.
But also, the point of code review is not just to catch mistakes and suggest improvements; it's also a way to learn and keep up to date with the code base, and to pick up tips from other developers.
And I guess a meta-point is that it's one of a number of practices that make the team a team.
I think what constitutes high velocity or best practice is going to be highly dependent on the context. What's the risk and consequences of a mistake, for example?
If you are making some CRUD SaaS and you are in an early-stage/startup setting, then you can probably correct a mistake pretty quickly. Perhaps you haven't even shipped the product publicly. Worst case, a service is unavailable or some data is deleted that can be recovered from backup. Bad day at work, but you'll recover.
But what if you are in a security critical business? What if your code is deployed to customers and it writes files using a file format that must be possible to read forever, and this code contains a bug making those files incorrect?
So: your review process, source control workflow, testing method, CI method, etc. will depend on the context, just like the best architecture. (Insert the Simpsons meme: "Say the line, senior developer!" "...It depends.")
Anyone who tries to offer "best practices" without caveats is either misinformed or selling snake oil (LinkedIn is the place to go if you want both of those).
I'm in a team whose members have a very high level of trust, yet we're still very strict on code reviews, because it helps catch mistakes and things that were forgotten, and makes sure things were actually thought through. It's a quality gate as well as a knowledge-sharing tool (both from reviewer to reviewee and about the code changes being made). None of these have anything to do with trust.
Code reviews have nothing to do with “trust”. It is another pair of eyes proofreading your code and providing at least a sanity check of the logic.
Think of it this way. When you were in college you wouldn’t turn in an important paper without anyone proofreading it, right? Well, the code you write deserves at least as much rigor as your sociology term paper did.
Code review protects against sloppy thinking and poor planning very easily. "I don't know what this is, or what it does; where is the design for this?" pumps the brakes on those sorts of changes.
Code review as a means of ensuring code quality (whatever that means) is secondary, and it requires much more of a time commitment on the part of the reviewer. This is what can make code review slow, and this is the fat to cut to make code review faster. It's probably not worth 45 minutes of developer time to generate a plethora of nitpicks and discussions about what is idiomatic. For a new hire / junior engineer: maybe worth it. Under steady-state conditions: definitely not.
On the other hand, I've worked with people who needed more reminders to do the right thing (eventually we had to discontinue our relationship); see the sketch after this list for automating the mechanical checks:
Does the PR make sense?
Did you format the code?
Did the tests pass?
Did you fix the tests? Or did you just comment them out?
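The mechanical items on that list don't need a human at all. Here's a minimal sketch of a pre-merge gate, assuming a Python project that formats with black and tests with pytest (both tool choices are assumptions; swap in whatever your stack uses):

    # premerge_check.py -- a sketch of automating the mechanical review checklist.
    # Assumes black for formatting and pytest for tests; adjust for your stack.
    import subprocess
    import sys

    def run(description, command):
        """Run a command and report whether it succeeded."""
        print(f"==> {description}: {' '.join(command)}")
        return subprocess.run(command).returncode == 0

    def main():
        checks = [
            # "Did you format the code?" -- check-only mode fails on unformatted files.
            ("formatting", ["black", "--check", "."]),
            # "Did the tests pass?" -- commented-out tests simply won't run here,
            # so also keep an eye on the collected test count over time.
            ("tests", ["pytest", "-q"]),
        ]
        failed = [name for name, cmd in checks if not run(name, cmd)]
        if failed:
            print("FAILED:", ", ".join(failed))
            return 1
        print("Mechanical checks passed; human review can focus on the design.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Wire something like this into CI or a pre-push hook and "did you format it / did the tests pass" stops being a review comment at all; the reviewer's time goes to "does the PR make sense".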
And for the people I trust, it's because they consistently produce insanely high quality code. And even for them, while I DO trust them, I prefer PRs.
Four eyes see more than two, and we have different mindsets.
It’s about risk management, communication and teamwork.
Development at scale is a team sport. There's a difference, and it's simple, starting with…
- Does my team understand this change?
- Is there anything I missed that would have a negative impact on others?
I once worked in a team where people did code reviews within 2 hours of a pull request being published. We moved so much faster as a team than any other team I was a part of. I bet there is a strong correlation between team velocity and low code-review latency.
It's easier to provide feedback and make changes prior to merging. There is usually value in the feedback that improves the quality of the merged code, whether a reviewer is requesting a change to adhere to code standards (I know, enforcement should be automated), or suggesting a different way to implement the change.
Yes, it's slow at the start, but the review process makes sure there's context and a common pattern emerging, which over time pays off in velocity. We have little tech debt. I wish I could say the same about merge-first. It's really not about trust at all.
I interpret that as saying: trust the process (of extreme programming), not the other developer (to be a superhuman who never creates serious bugs?).
I'm more curious about the details of how to organize this than any principled objections. Let's assume we are going to do review-after-merge; what would it take to make it successful? I think it's a good thought experiment.
I'm assuming we want most, if not all, of our changes that are going into production to be reviewed. So there's this branch that is getting constant updates of unreviewed code, let's call it trunk, from which we want to do a release, but we want all those unreleased, unreviewed commits to be reviewed before the release.
How do you keep track of what has been reviewed, and how often do you review it? Will you review a massive diff of unrelated changes every once in a while and freeze trunk while doing so? Or will you review individual pull requests as they were at the time of merging them? Will you review collectively?
I guess it depends on how often you want to release and with how much risk.
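One hedged way to answer the bookkeeping question: keep a moving marker (a git tag named last-reviewed here, which is an invented convention, not a git feature) pointing at the newest trunk commit that has been reviewed, review everything between the marker and the trunk tip before cutting a release, then advance the marker. A minimal Python sketch:

    # review_marker.py -- sketch of tracking "what has been reviewed" on trunk.
    # The tag name and branch name below are assumptions.
    import subprocess

    TAG = "last-reviewed"
    TRUNK = "trunk"

    def git(*args):
        """Run a git command and return its stdout."""
        return subprocess.run(
            ["git", *args], check=True, capture_output=True, text=True
        ).stdout

    def unreviewed_commits():
        """Commits on trunk that landed after the last review marker."""
        out = git("log", "--oneline", f"{TAG}..{TRUNK}")
        return [line for line in out.splitlines() if line]

    def unreviewed_diff():
        """The cumulative diff to review before a release."""
        return git("diff", f"{TAG}..{TRUNK}")

    def mark_reviewed():
        """After the review session, move the marker to the trunk tip."""
        git("tag", "-f", TAG, TRUNK)

    if __name__ == "__main__":
        commits = unreviewed_commits()
        print(f"{len(commits)} unreviewed commit(s) since '{TAG}':")
        for line in commits:
            print(" ", line)

That still leaves the freeze-or-not question open: you can review the cumulative diff in one sitting before a release, or walk the individual commits in that range, and either way you only freeze trunk for as long as the session takes.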
With unreviewed code, you can more easily point fingers at someone else. Even if there's no need for that, when it is time to update the code base again, now only one person is familiar with the code, so others are less likely to take it on.
With all the trust and good faith in the world, humans are still fallible. "High velocity" isn't helpful if you've lost control.
I trust myself. I still want someone to review my work, if nothing else than to catch typos.
You can’t proofread your own output.
You still have to trust your team to actually do reviews.
If so, sure, that probably saves time. Most reviews should be uneventful anyway, and it broadens the scope of review to the whole branch. That's helpful when you have lots of moving parts and multiple devs working on the same feature.
My opinion on this is that code review must happen especially when there is no other form of review; that's usually the more technical projects where the devs are left alone. Contrary to popular belief, there really are some projects so short-lived and truly throwaway, such as light web dev (marketing pages and whatnot), that final proofreading and QA are good enough.
If you think you can deal with the quality hit then yeah I guess just go crazy, spurt out another buggy pile, just don't bother me about it.
Perhaps it should be possible to bypass code review and smoke testing for minor changes to code comments that do not change production artifacts.
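A rough way to detect such changes automatically; this is only a heuristic sketch under my own assumptions (it understands nothing but '#'-style line comments and blank lines, and the trunk branch name is invented), so a '#' inside a string literal would fool it:

    # trivial_change.py -- heuristic check for "comments only" changes.
    # A sketch: a change counts as trivial if every added or removed line in the
    # diff against trunk is blank or a '#' line comment. Not production-grade.
    import subprocess

    TRUNK = "trunk"  # assumed integration branch name

    def changed_lines(base=TRUNK):
        """Added/removed lines from the diff, without their +/- prefix."""
        diff = subprocess.run(
            ["git", "diff", "--unified=0", base],
            capture_output=True, text=True, check=True,
        ).stdout
        lines = []
        for line in diff.splitlines():
            if line.startswith(("+++", "---")):   # file headers, not content
                continue
            if line.startswith(("+", "-")):
                lines.append(line[1:])
        return lines

    def is_comment_only(base=TRUNK):
        """True if the whole diff consists of blank lines and '#' comments."""
        return all(
            not line.strip() or line.lstrip().startswith("#")
            for line in changed_lines(base)
        )

    if __name__ == "__main__":
        verdict = "eligible for the fast path" if is_comment_only() else "needs full review"
        print(f"This change is {verdict}.")

Even with a check like this, confirm that the change really does not alter the production artifact before skipping smoke testing; that caveat is doing a lot of work in the sentence above.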
Another problem is creating good enough test infrastructure to retest the area(s) touched, and then deploying in phases (exponentially widening the rollout) while looking for positive and negative signals.
It has nothing to do with trust if I make code reviews a must-have. Everybody makes mistakes, no matter how good they are.
Trust is built by the fact that even though I'm the CTO and one of the founders, I also cannot push stuff just because I feel like it. I have to go through the same process as everyone else: code review and QA. No shortcuts.
This gives my people the idea that I do trust them to check even the "big boss" of the company, and yes, I have to fix my shit if they find something.
Because it goes back to my first point: everybody makes mistakes, and there is nothing wrong with that.
So basically no.
It's one of the reasons that I write my tests before the code: I know that if I write the code first, I'm less likely to write any tests for it.
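For illustration, test-first just means the failing test exists before the implementation does; all the names below are invented:

    # --- test_discount.py -- written first; it fails until discount.py exists ---
    import pytest
    from discount import apply_discount

    def test_ten_percent_off():
        assert apply_discount(price=100.0, percent=10) == pytest.approx(90.0)

    def test_rejects_negative_discount():
        with pytest.raises(ValueError):
            apply_discount(price=100.0, percent=-5)

    # --- discount.py -- the smallest implementation written afterwards to pass ---
    def apply_discount(price, percent):
        """Return the price after a percentage discount; reject negative percents."""
        if percent < 0:
            raise ValueError("discount percent must be non-negative")
        return price * (1 - percent / 100)

Writing the test first also forces you to decide what the interface should look like before you're attached to an implementation.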
Code reviews are not about trust. They are about 1) educating your teammates on the whole codebase slowly, PR by PR, and 2) getting a second pair of eyes on your solution to a problem (perhaps it could be solved in a simpler way, or perhaps there is a subtle bug).
i.e. trust an engineer's ability to judge their own PR as trivial.
This requires trusting good intent (which I think is healthy), and trusting their self awareness.
I work with people I do "trust" and my code-review is usually "LGTM" -- often without even looking. Most code doesn't go right to production and can be fixed if there are bugs (which I'm unlikely to find by looking at it, anyway.)
The things I look at are things that are hard to change in the future: big architectural decisions, schema (db and data format) changes and things like that.
For more junior and newer people, I tend to look more closely and give more advice.
Personally, I get annoyed when people feel they need to critique something to make it clear they reviewed the code. In most cases, the feedback I get from reviewers is not consequential -- and just slows things down. But I have certainly had cases where the reviewer made great points and I changed the code to fit.
For example, if my team was very junior, I'd be less inclined to merge first, but if it was very senior, I'd be more inclined. But even then, it depends on other factors.
[0] It's not mine, it's my employer's.
Sometimes speed for speed's sake is a bad idea.