Rapidly boost code review efficiency for sizeable distributed teams
Not surprisingly, there's already a lot of literature on what makes a good Code Review - also known as a Merge Request or Pull Request; e.g. "What to Look for in a Code Review" by Trisha Gee. Code review also shows up as part of technical interviews.
However, what I want to focus on is the code review process in its entirety, as it takes place within one or more teams. It typically splits into the following parts: the creation of the CR request; the actual act of giving feedback by one or more reviewers; the amendments that follow from applying that feedback and asking for another round of review; and finally, closing the Code Review, either by merging the changes into the target branch or by abandoning them.
Trouble is, once you have some sort of Code Review process guidelines in place, how do you get large teams to learn from them and change their behaviour during the code review process accordingly? Yes, I know, I just stated pretty much the whole mission of technical leadership. Creating comprehensive guidelines for all steps of the code review process, covering both actors - code author and reviewer - is a good start, but definitely not enough to yield results.
Only a few will read the guidelines
The ones who do will forget most of the details without actively practicing them. A few things will stick, depending on what each individual is prone to remember, and the guidelines can serve as a "go-to" place for beginners. Still, as time passes, folks will forget they exist.
You can link them in the Code Review template; then again, do not expect too many people to click and skim through them. Clicking and opening a new page from a CR is an effort most won't make.
A Code Review template has better odds of yielding results, if well tailored!
There's a good read on templates here, mentioning one example here and another here. I won't share yet another template with you, but I'd like to walk you through the most important bits of designing one for your team. The purpose of the template is threefold:
👍 to instill the good habits you laid out in the detailed guidelines that no one reads
✌️ to as many people as possible
👌 in the long term
The CR template is the default free-text body of the Code Review. That makes it probably the most precious and "expensive" advertising space amongst engineers in your org, so choose your words wisely and don't overdo it. Keep it short and efficient. Give authors the choice not to fill in the template, but to erase it and do their own thing in the free-text area instead. I chose to mention that upfront, in the template.
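To make this concrete, here is a minimal sketch of how the top of such a template could look, assuming a GitHub-style `.github/pull_request_template.md`; the wording and the guidelines link are placeholders, not my team's actual template:

```markdown
<!-- Not feeling the template? Delete everything below and describe the change your own way. -->
<!-- Detailed code review guidelines (optional deep dive): <link to your internal guidelines page> -->
```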
The author should be reminded to create the best possible experience for the reviewer
The body should include any non-obvious context: what the code change is about, how it tries to achieve that, and why. If the change is one commit with a good enough commit message, then fine, just point to it. I've heard engineers say "but I am already required to link a work item from my backlog". I don't believe that's enough: we all know the User Story is not about the "codez" so much as the acceptance criteria and the business request; e.g. it won't mention why the code author moved a class from one package to another. That's for the author to explain in the CR. And few reviewers will actually open and read the work item to gain context (kudos to you if you do!).
The code change should be as final as possible. Don't submit code for review that's not production-ready - unless you're asking for quick feedback, which is totally fine, but a different process. To check on this, in my team's template we ask the author to write down exactly how the change was tested. Manually, with fabricated inputs? Unit-tested? End to end? On a feature-testing build? Which one?
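As an illustration, and not my team's exact wording, the author-facing part of the template could boil down to two short prompts:

```markdown
## What, how & why
<!-- What does this change do, how does it achieve it, and why?
     If one commit message already tells the story, just point to it.
     Call out anything non-obvious, e.g. why a class moved to another package. -->

## How was this tested?
<!-- Be specific: manually with fabricated inputs? Unit tests? End to end?
     On a feature-testing build? Which one? -->
```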
The reviewer should be reminded what's important to touch upon
And, of course, to be kind 😊. Although I haven't seen this done anywhere else, we chose to include a short checklist for the reviewer, as a read-only section in the template. The checklist will vary heavily with the nature of the project, so each technical lead or manager of a delivery team should have a strong say in what goes into it.
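The items below are only an example of what such a read-only section might contain, not a universal list; swap them for whatever matters in your project:

```markdown
## Reviewer checklist (read-only, please don't edit)
- [ ] The change does what the description says, and nothing unrelated sneaked in
- [ ] Error handling and edge cases are covered, or their absence is justified
- [ ] The testing described above matches the risk of the change
- [ ] Naming, structure and public APIs follow the team's conventions
- [ ] Feedback is kind: phrased as questions or suggestions 😊
```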
Make it fun
Or create some other intrinsic drive for both the author and the reviewer to put in the effort - don't make it sound like yet another chore. Ask for some creative input. Ask how they felt coding that. Occasionally praise great Code Reviews publicly, pointing out what made them stand out. Don't force it; pick genuinely good code review requests, or code reviews. I've seen people include before & after screenshots, performance metrics, or JavaScript bundle size comparisons.
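If you want the template itself to invite that kind of input, an optional section is one way to do it - again, just a sketch, not a prescription:

```markdown
## Show it off (optional)
<!-- Before/after screenshots, performance numbers, bundle size diffs,
     or simply a line on how it felt to build this. -->
```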
Success indicator?
When we started this out with multiple teams in my org, I received an interesting question: How would we tell the template was successful? What would this metric be?
I would like to address this question in a separate follow-up post. Reviewing is not quantifiable; it all comes down to professional judgement. What we would need to do is sample Code Reviews from before and after adopting the template, and subjectively judge how much of a difference it made.
Written by
Mihnea Simian
software engineer • engineering lead • vimer • film photographer • kart racer • snowboarder • barista • skipper wannabe