First I skim over the full PR to get an impression of the changes. While doing so, I mark areas that require more attention, along with any issues I spot. If it's a new feature, I run the code and usually try it out in the app (I do mostly FE and design), flagging any design/usability issues. Basically a bit of QA: I try to break it.
After that I dive into the code and work through the things I flagged in the first pass. If I can't understand something from the diff alone, I do some logging, poke at it, and use debuggers. Once I understand everything, I go looking for bugs, unhandled cases, and "risky code". Basically: will this cause issues now or in the future? IMO this is the most important step.
Lastly I do a quick pass to see if things "make sense": is there any BS code that needlessly complicates things? Any logic that already exists in our code base? Bad structure? Slow code? Basic sanity checks.
Then I go over my comments (if I have any) and make sure they are concise, clear, and understandable. I also drop any nitpicks, and either mark unimportant comments as [minor] or remove them too. Then I write a review summary, usually highlighting the positive things in the PR as well as my most important findings. Then I submit!
- Read the ticket
- Read the changeset description
- Read a few relevant areas of the code to refresh knowledge and context
- View the commit messages
- Take a pass through the commits, to get a sense of what the author was thinking (or wants to present that they were thinking -- sometimes a confusing distinction, but possible)
- Consider whether the complete changeset achieves the stated goals
- Consider any aspects of the changeset that could lead to problems in future (maintenance, security, code re-use, business agility, ability to communicate/explain the functionality)
- Consider any aspects of the changeset that can be simplified or optimized -- and whether doing so makes sense in context (time/effort tradeoffs)
- Consider deployment issues -- how will we roll this change out, and will it be possible to revert it, or is it one-way? What are the time/effort tradeoffs if it goes wrong?
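The revert question in that last point often comes down to whether the new behavior sits behind a switch. A minimal sketch of the idea, with all names hypothetical (a real system would use a flag service rather than an environment variable, but the review question is the same: can we turn this off without a rollback?):

```python
import os

def new_checkout_flow_enabled():
    # Hypothetical feature flag read from the environment.
    return os.environ.get("NEW_CHECKOUT_FLOW", "off") == "on"

def checkout_v1(cart):
    # Existing, battle-tested code path.
    return sum(cart)

def checkout_v2(cart):
    # New code path under review; identical behavior here for brevity.
    return sum(cart)

def checkout(cart):
    # The flag makes the rollout two-way: flipping it back reverts
    # the change without a deploy.
    if new_checkout_flow_enabled():
        return checkout_v2(cart)
    return checkout_v1(cart)
```

A change structured like this is cheap to revert; a one-way change (say, a destructive data migration) deserves much closer scrutiny of the "what if it goes wrong" tradeoff.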
In practice:
- Get distracted by something that looks like a bug or typo within the first 20 lines of the diff and comment on that
- See a repeated pattern of problems by line 50 that also seems worth commenting on
- Reach the end of the diff and miss the fact that the unit tests aren't covering what they claim due to some subtle issue
- Say it looks OK-ish I guess and approve the change, despite some subconscious sense that something's wrong, somewhere -- never mind, that sense will pass by the time my coffee meeting this afternoon rolls around
(I'm sorta-joking; I think most reviewers are familiar with these problems and learn to develop anti-biasing strategies -- it can be difficult, though)
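As a hypothetical illustration of the test-coverage trap above (function and test names are made up): a test can read like it covers the error path while never actually reaching it, so it passes for the wrong reason.

```python
def parse_port(value, default=8080):
    # Return value as an int port, falling back to default on bad input.
    try:
        port = int(value)
    except (TypeError, ValueError):
        return default
    return port if 0 < port < 65536 else default

def test_rejects_invalid_input():
    # Looks like it tests the invalid-input path, but "8080" is a
    # perfectly valid string: int("8080") succeeds, so the except
    # branch is never executed.
    assert parse_port("8080") == 8080

def test_rejects_invalid_input_fixed():
    # Actually exercises both fallback paths.
    assert parse_port("not-a-port") == 8080  # unparseable -> default
    assert parse_port(99999) == 8080         # out of range -> default
```

Skimming a diff, the first test and the second look equally reassuring; it takes a deliberate "what does this assertion actually execute?" pass (or a branch-coverage report) to catch the difference.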