* Nobody reviews it for days because everyone is busy working on their own stuff. This is likely because code review isn't as rewarded as pushing code ("I implemented X" sounds better in a standup than "I reviewed a bunch of PRs yesterday").
* You end up in an endless back and forth with one or more reviewers (e.g. push-review-fix-push-review-fix-push-review...). If this occurs over the course of days, the "fix" portion becomes an interruption where you have to remember the context of the changes, make the changes, then submit them.
These are the two instances I can think of off the top of my head, but if you have other examples and strategies for combating them, I'd love to hear them.
- high-level tests can be submitted before the code implementing them exists; they just need to be disabled, then selectively re-enabled in subsequent PRs (a sketch follows below) - this way any logical design issues should surface in the tests, which are much cheaper to deliver and iterate on than the implementation
- any refactoring is submitted separately (it's easy to verify because it shouldn't affect any valuable tests, which rules out a whole class of issues), even if it didn't happen "organically" during development - take some of the burden on yourself so that your reviewers don't have to waste time trying to logically untangle refactoring from behavior changes in their heads - it should be much easier for you, since you wrote the code
- features are delivered in an incremental fashion whenever it makes sense - you probably don't need to submit a fully working solution; you can prop it up with fakes or simply not expose the changes to the user (feature flags, or just not hooking up the feature yet)
If you do that, it makes things much more digestible, at some cost to understanding the overall solution (starting with the tests and the necessary refactoring should help describe what you want to achieve) and some burden from having to split things up. It has worked very well for me, at multiple companies.
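To make the first point concrete, here's a minimal pytest sketch of the disabled-test idea; `shop.orders` and `create_order` are hypothetical names, not from any real codebase:

```python
import pytest

# Hypothetical high-level test submitted ahead of the implementation.
# The skip marker is removed in the follow-up PR that actually
# implements create_order.
@pytest.mark.skip(reason="enabled by the follow-up implementation PR")
def test_order_total_includes_tax():
    from shop.orders import create_order  # doesn't exist yet

    order = create_order(items=[("book", 10.00)], tax_rate=0.08)
    assert order.total == pytest.approx(10.80)
```

Reviewers can sign off on the intended behavior from the test alone, and the later implementation PR becomes a mechanical "make this pass and delete the skip marker".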
The real problem is here. Code review must be a first-class citizen of your team culture to be successful. It is the responsibility of management/scrum master/etc. to foster, and if necessary enforce, that culture.
1. Break large feature branches into smaller PRs so they are more easily digestible by reviewers. If this is not feasible, at the very least ensure each commit in the PR is concise and meaningful, so reviewers can look at the PR commit-by-commit if needed.
2. Have specific reviewers or "owners" for different areas of the codebase. Tossing a PR into the abyss without specific reviewers isn't a winning strategy for getting the desired feedback in a timely manner.
3. Do not backtrack when requesting PR changes; that is, each round of review should only cover the code that has changed since the last round (don't retroactively ask for changes to code introduced in the original PR if it's the second or third time reviewing).
4. Code reviews should be celebrated at the team level; they shouldn't be seen as a net negative on one's velocity but as a net positive for the team's velocity.
5. Be proactive: be an active reviewer of your peers' code, and they'll feel a bit more urgency about helping with your PR reviews as well.
My 2c,
Some dev on the internet who spends a lot of time reviewing code
Just ask people nicely. If they do your review promptly and objectively, ask again.
> You end up in an endless back and forth with one or more reviewers (e.g. push-review-fix-push-review-fix-push-review...).
The fault is clearly theirs in this case; they're being obsessive and not showing respect for your time. In any case, if it's one person doing this, you can pre-review your solution with them before you start implementing it.
- people asking each other to "rubber-stamp" one-liner PRs
- breaking up changes into small PRs holding up progress, because a change a reviewer requested on an earlier chunk would have to be propagated through all the later chunks
- no one wanting to touch large PRs
Our solution was drastic: we abolished code review.
- Big changes, for some definition of "big," go through an initial design review in a Google doc.
- Each feature is owned by exactly one person. You merge code directly into main at whatever pace you choose. This removes the bottleneck of CR.
- All bugs that pertain to your feature come directly to you and you work on them as your first priority. This encourages us to write correct, appropriate, well-tested code.
- If a feature owner leaves the team, the feature is reassigned to someone else. In the rare case when the feature is too hard to hand off, we rip it out and rewrite it. It's obvious that the support burden of such a feature would have been too high. This encourages us to write clear, simple, understandable code.
We're pretty happy with this arrangement so far.
With each revision of our product spec, we first write or modify code to meet the new spec; then, once everybody has tested and merged their changes, we pause new development to review and verify the code. We tend to each work on different modules, so we can maintain complete independence in review, with each of us reviewing others' modules and paying minimal attention to who authored each individual diff. We keep a spreadsheet tracking all the files, their authors, and whether they've changed since the last review; at the start of a new review, we assign a reviewer to each file, and then the author and reviewer work the file through review-resolve rounds until there are no further findings.
Since everybody is fully focused on the review, it goes by quickly; since there's no expectation of review during development, we can develop quickly without holdups.
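For anyone curious what that tracking sheet could look like in practice, here's a rough sketch (not the commenter's actual tooling) that regenerates it from git; the tag name and CSV layout are assumptions:

```python
import csv
import subprocess

# Assumed convention: a git tag marks the commit of the last full review.
LAST_REVIEW_TAG = "review-2024-01"

def tracked_files():
    out = subprocess.run(["git", "ls-files"],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def main_author(path):
    # Most frequent commit author for the file; a rough "owner" guess.
    out = subprocess.run(["git", "log", "--format=%an", "--", path],
                         capture_output=True, text=True, check=True)
    authors = out.stdout.splitlines()
    return max(set(authors), key=authors.count) if authors else ""

def changed_since(tag):
    out = subprocess.run(["git", "diff", "--name-only", f"{tag}..HEAD"],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.splitlines())

changed = changed_since(LAST_REVIEW_TAG)
with open("review_sheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file", "author", "changed_since_last_review"])
    for path in tracked_files():
        writer.writerow([path, main_author(path), path in changed])
```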
2) Code review checklist and a coding standard document. Any review item that's not part of the checklist or a violation of the standard is a "suggestion" and can be ignored. That by itself cuts down the back and forth like you wouldn't believe.
The back and forth is part of the process, but it's a lot shorter once someone has actually read the code.
Also, lowering the bar helps. We do still flag badly formatted code and misleading variable names, and we definitely check for major bugs. But we skip merely suboptimal names and inconsistent architecture, unless something conflicts with a written code style doc.
Part of the delay is thinking of a way to "negotiate" a style change. I'd recommend putting together a written style guide to minimize that: if you want camel case or whatever, negotiate it in the style guide.
If team members have their capacity fully allocated, there is no time to do proper code review. Code review then gets ignored as much as possible, or is seen as time lost from the project, and what review does happen turns superficial.
Before fixing anything else, you must first understand whether the team has time to do code review at all. If the expectation is that they produce the same output as before while also doing reviews, then the team does not have time for code review.
Only once there is time for it should you go on to fixing/improving the process.
this way, it stops being "I reviewed PRs" and becomes "we [the team] merged N commits to master"
in my previous team, I got into the habit of doing code review first thing in the morning, and right before the end of day, so others could either start by checking my review comments, or merge at or near the beginning of the day
it helps if commits are very small
If there's no obvious reward for code reviewing as opposed to implementation, create one.
Do reviews in the morning, or at the end of the day if you're in the zone.
Gate your PRs with automated checks for anything you really want to enforce (docs, formatting, tests, etc)… but when it comes to everything else, leave that to dedicated code review sessions where the feedback will be less rushed, more thoughtful and far more likely to be taken constructively.
Works a charm for my team - mostly infra/terraform and python.
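For a Python repo, such a gate could be as simple as a script CI runs on every PR; the specific tools here (black, flake8, pytest) are stand-in assumptions for whatever your team actually enforces:

```python
import subprocess
import sys

# Minimal sketch of a CI gate: run each check, fail the build if any fails.
CHECKS = [
    ["black", "--check", "."],  # formatting
    ["flake8", "."],            # lint
    ["pytest", "--quiet"],      # tests
]

failures = []
for cmd in CHECKS:
    print("running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        failures.append(" ".join(cmd))

if failures:
    print("gate failed:", ", ".join(failures))
    sys.exit(1)  # a nonzero exit blocks the PR in CI
```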
It does take some shifts to make reviews a normal part of the culture. Send reviews to specific people and set deadlines. (Sometimes this means assigning "code owners" for specific areas; sometimes it's having a "review buddy".) Add bots and/or calendar reminders if you need to poke someone to get the review done. If "nobody" is the right person to review it, don't have a review at all: it's fine to say that changes to certain areas, or below a certain size, don't need another reviewer if they pass builds/tests. I've seen teams with bots that auto-merge small PRs after a specified amount of time with no review comments, and I've heard of teams that set "these types of PRs are fine to self-merge if" checklists for applications with active QA teams, where it's fine to merge code quickly for fast feedback and iterate in further PRs.
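Such an auto-merge bot doesn't have to be elaborate. Here's a hedged sketch against the public GitHub REST API; the repo name, size threshold, and quiet period are made up for illustration, and real usage would want error handling and an approval check:

```python
import datetime as dt
import os
import requests

REPO = "example-org/example-repo"  # hypothetical repo
TOKEN = os.environ["GITHUB_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}",
           "Accept": "application/vnd.github+json"}
MAX_CHANGED_LINES = 20             # "small PR" threshold (assumed)
QUIET_PERIOD = dt.timedelta(hours=24)

prs = requests.get(f"https://api.github.com/repos/{REPO}/pulls",
                   headers=HEADERS, params={"state": "open"}).json()
for pr in prs:
    detail = requests.get(pr["url"], headers=HEADERS).json()
    opened = dt.datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    small = detail["additions"] + detail["deletions"] <= MAX_CHANGED_LINES
    quiet = detail["review_comments"] == 0
    waited = dt.datetime.now(dt.timezone.utc) - opened >= QUIET_PERIOD
    if small and quiet and waited:
        # Merge small PRs that nobody commented on within the quiet period.
        requests.put(f"{pr['url']}/merge", headers=HEADERS)
```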
> You end up in an endless back and forth with one or more reviewers (e.g. push-review-fix-push-review-fix-push-review...). If this occurs over the course of days, the "fix" portion becomes an interruption where you have to remember the context of the changes, make the changes, then submit them.
First, automate anything that can be automated. Use linters. At this point I prefer opinionated linters/formatters that auto-format rather than just complain (prettier for the web stack is a strong example). Once that's in place, treat "style" as something never to mention in a PR that has passed the automated checks.
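In the same spirit, formatting can be fixed before it's even committed. A minimal git pre-commit hook might look like this (assuming black, and a Python-only repo):

```python
#!/usr/bin/env python3
"""Hypothetical git pre-commit hook: auto-format staged Python files
with black so style questions never reach review."""
import subprocess
import sys

# Files staged for this commit (added/copied/modified only).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

py_files = [f for f in staged if f.endswith(".py")]
if py_files:
    subprocess.run(["black", *py_files], check=True)       # rewrite in place
    subprocess.run(["git", "add", *py_files], check=True)  # re-stage results
sys.exit(0)
```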
As a review submitter, find ways to ask where the lines between "cosmetic", "suggestion", "tech debt", and "show stopper" are. Eliminate cosmetic comments (again, if it matters, configure it in a linter and automate it away). Work on the confidence to ignore suggestions/tech debt (learn from them, but don't feel compelled to answer every single one with a response or a commit). One key way to both ignore suggestions/tech debt and find what actually is a "show stopper" is variations on the very simple question "Can we move this to the backlog for a follow up PR?" or "Can I keep this in mind for the next PR for this feature?" A lot of things that sound like show stoppers are merely suggestions, but in the context of a PR it can sometimes feel like everything the reviewer mentions is important.
As a reviewer, find better ways to set those expectations up front. Mark suggestions as suggestions; sometimes just adding "Suggestion:" in front does so much to set expectations about how important a note is. Use more phrases like "if you have time" or "something to keep in mind for next time". It's too easy to note every single thing "wrong" you find and forget to mention that most of it doesn't matter in the short term. It's also useful to be proactive sometimes: if it's a tech debt item that can wait, merge the PR and add the item to the backlog with a mention of the PR, without "making a full round" of it.
Some of that is setting expectations for when a PR is "done". It's okay to expect PRs to be "done" quickly and to instead expect follow up PRs. Putting too much pressure on "a PR needs to be perfect" gets in the way of the ideals of continuous integration. You want code interacting sooner rather than later. You can fix up the suggestions and tech debt next time, but it's sometimes important to have that extra time for integration problems to be discovered and QA testing to happen on the "not quite perfect" PRs.
I try to keep my code changes small. I'd rather have four changes in four different PRs than in a single one. Have you ever opened a PR, seen 25-100 files changed, and just moved on instead of going through it?
Also, I will add a small explanation of what I'm trying to solve, and if I did something "smart" I'll explain why.
Finally I keep a small list of people that tend to approve my changes fast and ping them if enough time has passed.
On the flip side when I see a PR, I will let people know that I'm looking at it OR that I will look at it at a certain time and they should ping me if they don't hear from me (approval or comments). Also I have compiled a checklist that I follow for my PRs which speeds up the process, at least for me.
My PR checklist has three sections: what I should evaluate, how I should communicate, and how to write things in the PR itself. It's an accumulation of things I have seen on HN, the Google Code Review process, and reminders to myself. I tend to sound more authoritarian than I mean to, so it's a reminder to be more gentle.
# Checklist
1. Design - is the overall design sensible and appropriate for the system?
1. Functionality - Does the code do what it's supposed to do?
1. Complexity - are individual lines, functions, or classes too complex (i.e., hard to understand quickly)?
1. Maintainability - is anything too tightly coupled? Are configurations hard-coded?
1. Comments - are the comments actually necessary? They should explain why. Are there any comments/TODOs that can now be removed?
1. Documentation - has the appropriate documentation been updated?
1. Good Things - comment on the good things as well
1. In general, favor approving a CR once it's in a state where it definitely improves the overall code health of the system being worked on, even if it's not perfect
## Communication
* Accept that many programming decisions are opinions.
* Ask good questions, don't make demands (what do you think about naming this `user_id`?).
* Good questions avoid judgement and avoid assumptions about the author's perspective.
* Ask for clarification ("I didn't understand. Can you clarify?").
* Avoid selective ownership of code (mine, not mine, yours).
* Avoid using terms that could be seen as referring to personal traits (dumb, stupid). Assume everyone is intelligent and well-meaning.
* Be explicit. People don't always understand your intentions online.
* Be humble (I'm not sure - let's look it up).
* Don't use hyperbole (always, never, endlessly, nothing).
* Don't use sarcasm.
* Keep it real. Be yourself.
* If there's too much back and forth, talk synchronously; then post a follow-up comment summarizing the discussion.
## Reviewing Code Communication
* Communicate which ideas you feel strongly about and which you don't (e.g., prefix minor ones with "Nitpick:").
* Identify ways to simplify the code while still solving the problem.
* If discussions turn too philosophical or academic, move the discussion offline to a regular Friday afternoon technique discussion. In the meantime, let the author make the final decision on alternative implementations.
* Offer alternative implementations, but assume the author already considered them. ("What do you think about using a custom validator here?")
* Seek to understand the author's perspective.
* Sign off on the pull request with a 👍 or "Ready to merge" comment.
* Remember that you are here to provide feedback, not to be a gatekeeper.