Wikipedia defines a code review as "a systematic examination of computer source code intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills". A great deal of evidence, both empirical and anecdotal, supports the effectiveness of code reviews, and most would consider them a best practice in software development. In general, the main benefits are:
- Improve internal quality of the system (e.g. maintainability, extensibility, etc.)
- Improve external quality of the system (e.g. fewer defects, improved reliability)
- Facilitate the transfer of critical project-related knowledge amongst team members.
- Provide a forum for the mentoring of junior developers.
Some less obvious or widely touted, but equally valuable benefits from code reviews are:
- Spot opportunities for re-use, thereby reducing code redundancy.
- Enforce consistency in development practices and application of coding styles and patterns.
- Provide managers with a measure of progress during the construction phase.
- Keep developers motivated to perform higher quality and quantity work via a peer pressure mechanism.
Fagan inspections are typically viewed as the canonical form; however, code reviews come in a number of flavors:
- Fagan Inspection: The most formal and rigorous of code review techniques. Involves the public review of one developer's source code by a set of reviewers, facilitated by a moderator.
- Walk-through: A less formal, perhaps impromptu meeting where developers read through code in real time with the intent of detecting errors.
- Pair Programming: A practice whereby two developers work together at one keyboard. Note I'm not a huge fan of pair programming.
- Pass-around: Code is distributed and reviewed independently by developers, and comments are then provided to the author.
- Automated Review: Code is mechanically inspected by static source code analysis tools.
Recognizing that the "fit" of the code review process within an organization is critical to its success, it would be unwise to rigidly recommend one process for developers to dogmatically apply across all clients. Instead, what is needed is a code review process that is effective "out-of-the-box" for our typical projects and customers, but ultimately adaptable, acknowledging that no project or customer is ever really "typical". Additionally, the process must be relatively familiar to most developers (e.g. similar to a Fagan inspection), must not be overly demanding or time-intensive, and must mesh well with other SIP processes (e.g. RUP, Scrum, etc.). The process defined below aims to achieve each of these goals.
Process
The diagram below illustrates the baseline process for a code review. Note that for the purposes of this process, there are just two roles – authors and reviewers. Typically, the set of reviewers will consist only of developers on the project team, but could also include other project stakeholders (e.g. for validating business logic, etc.).

1. Identify Files: After being informed of the segment of code to be reviewed (see "What code to review"), the author identifies a set of source files (e.g. configuration files, UI code, scripts, etc.) and possibly requirement or design documents (if they are germane to the review) and emails this list to all reviewers along with any caveats (e.g. "don't look at method X, because it's not finished", etc.). This list of files should take no more than one hour to compile and should be sent at least 4 days prior to the meeting, giving reviewers enough time to inspect the code.
2. Review Code: Upon receiving the list of files from the author, each reviewer reviews the code on his own. Among other things, reviewers are looking for functional correctness, style, clarity, extensibility, reusability, and performance. For each project, a code review checklist can be created (in the SDG or on the wiki) and then referenced by the reviewer to ensure that code adheres to the project-specific conventions and standards. Each reviewer records all issues, suggestions for improvement, or kudos found in the code, organizes them by file, and then emails this feedback to the author at least one day prior to the meeting. Reviewers should cap the amount of time spent reviewing code at one hour (data suggests that there are diminishing returns after 60 minutes, and investing more than an hour is too taxing on the individual reviewer – he has his own work to do too!).
3. Review Feedback: After receiving all feedback from Reviewers, the Author should examine each comment and determine (a) whether the comment should be addressed (e.g. fixed, refactored, etc.) and (b) whether the comment merits extra discussion among the group. For example, feedback about a missing method comment is easily addressable (just add the comment!) and doesn't warrant a full discussion. On the other hand, a question about a coding convention may indicate the lack of a standard, and should be discussed to garner consensus. The author should then compile a list of all comments from the Reviewers, mark those comments that need to be discussed, and print or email a copy for each Reviewer prior to the meeting.
4. Discuss Code: Only when the Reviewers have examined the code and the Author has filtered the list of items to discuss should everyone get together and talk. The Author facilitates the meeting, stepping through each discussion item and making notes of any resolutions or decisions. The meeting should move briskly, but should not be rushed – good, substantive discussion is the goal, and this takes time. As a rule of thumb, the meeting should last no longer than one hour. Any items not discussed or resolved should be tabled for another meeting or figured out independently by the Author or tech lead. Meetings that last over an hour typically leave participants feeling frustrated and less interested in doing a code review again.
5. Implement Changes: Within a few days of the meeting, the Author cycles through the list of comments from the review and makes changes to the code where appropriate. In some cases it is useful to post the feedback, resolutions, or decisions to a wiki, but only if there is time (this is an agile process!).
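As a rough illustration, the feedback-compilation work in steps 2 and 3 can be sketched in code. This is a minimal, hypothetical sketch – the tuple shape and field names are my own assumptions, not part of the process:

```python
# Minimal sketch of compiling reviewer feedback (steps 2-3 above).
# The tuple shape and field names are illustrative assumptions.
from collections import defaultdict

def compile_feedback(reviewer_comments):
    """Group (reviewer, file, comment, needs_discussion) tuples by file."""
    by_file = defaultdict(list)
    for reviewer, path, comment, discuss in reviewer_comments:
        by_file[path].append(
            {"reviewer": reviewer, "comment": comment, "discuss": discuss})
    return dict(by_file)

def discussion_agenda(by_file):
    """Return only the comments the author marked for the meeting."""
    return [(path, c["comment"])
            for path, comments in sorted(by_file.items())
            for c in comments if c["discuss"]]

comments = [
    ("Ann", "OrderService.java", "Missing method comment", False),
    ("Bob", "OrderService.java", "Is this our convention for DAOs?", True),
]
agenda = discussion_agenda(compile_feedback(comments))
# Only Bob's convention question makes the meeting agenda;
# Ann's easily addressable comment is simply fixed.
```

The point of the structure is the filter in step 3: trivially addressable items never reach the meeting, so discussion time is spent only where consensus is needed.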
Best Practices
The following practices have been proven to be generally helpful when performing code reviews:
What code to review: In general, the project's technical lead is the most appropriate person to select the code to review; however, the project manager or the team as a whole could decide as well. When selecting code to review, the following general rules should be considered:
- The first review of any project should target a senior developer's code, setting the tone that no developer writes perfect code and any developer can find valid issues in any other developer's code.
- Critical or risky pieces of code (e.g. key business logic, integration points, reusable libraries) should take precedence over non-critical, safer pieces of code.
- Over the course of a project, each developer should have their code reviewed. No one on the team should be exempt, neither the most junior nor the most senior.
- A segment of code to be reviewed should include no more than 10 files.
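The 10-file cap lends itself to a trivial helper; a sketch (the function and variable names here are my own, purely for illustration):

```python
# Sketch: split a candidate file list into review segments of at most
# ten files each, per the rule above. Purely illustrative.
def review_segments(files, max_files=10):
    """Chunk a file list into review-sized segments."""
    return [files[i:i + max_files] for i in range(0, len(files), max_files)]

candidates = [f"src/Class{i}.java" for i in range(23)]
segments = review_segments(candidates)
# 23 files -> three segments of 10, 10, and 3 files,
# i.e. three separate review cycles rather than one oversized one.
```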
Use static code analysis tools prior to review: Static code analysis tools like PMD and Checkstyle can be extremely useful in rooting out syntactic and stylistic issues in the code more quickly than a manual review. See automated review for standard configurations and best practices.
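PMD and Checkstyle are Java tools with rich rule sets; as a language-neutral illustration of the kind of rule such tools automate, here is a toy checker of my own (not any real tool's API) that flags functions lacking a documentation comment:

```python
# Toy static check in the spirit of a PMD/Checkstyle rule: flag
# functions with no docstring. Real tools offer far richer analyses.
import ast

def missing_docstrings(source):
    """Return names of functions in `source` that lack a docstring."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and ast.get_docstring(node) is None]

sample = '''
def documented():
    """Has a docstring."""
    return 1

def undocumented():
    return 2
'''
print(missing_docstrings(sample))  # -> ['undocumented']
```

Running checks like this before the human review means reviewers spend their hour on design and correctness rather than mechanical style issues.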
Be careful of feelings: Developers by their nature often feel quite attached to their work. Criticisms of code can easily be perceived as criticisms of the person, and so it's common that authors leave feeling hurt, angry, or scorned. Because of this, reviewers need to be keenly aware of the social component in code reviews, being careful to praise more than criticize, focus on the code and not the person, ask questions rather than make statements, and in general be tactful and build trust. As Karl Wiegers says, "bugs are the bad guy in the review, not the author".
Document coding standard: Oftentimes code reviews will expose differences in opinion about coding standards, best practices, or patterns. These disagreements are extremely useful, as they are the first step toward consensus. In fact, code reviews are sometimes the only opportunities developers get to discuss and resolve such issues in the course of a development cycle – and catching them quickly is helpful for ensuring coding consistency and reducing redundancy. As agreement is reached, however, it is important to document this in a knowledge repository (e.g. Wiki) or in an SDG, SAD, or SDP to ensure that the conflict can be quickly resolved in the future.
Find a management advocate: In staff augmentation projects, garnering support from management is imperative - without it, the time spent cannot be justified (e.g. on time-sheets, etc.), developers will resist or thwart adoption, and the practice will degenerate as other time pressures come to the fore. Oftentimes, managers will be supportive of a code review practice; when they are not, it is helpful to present the benefits (both intangible and hard data) and note that code reviews are a common, proven practice adopted by countless other mature software organizations. If support is still not there, code reviews can be held during off hours (e.g. lunches, early mornings, etc.).
Assign a watch-dog: Most developers know that code reviews are good for them, but like exercise or vegetables, they're easily neglected. Therefore, it's important to have one person responsible for managing, coaxing, and otherwise nagging developers about code reviews. This most often is the responsibility of the Technical Lead on the project, but could also be the project manager.
Review code every week: A common practice is to review code prior to the completion of a requirement, using it as a code quality gateway that developers must cross through before a final check-in. While this seems ideal in theory, in practice the logistics often become intractably complicated. In most development organizations, developers complete features at the same time (i.e. before the end of the iteration), and so each developer needs each other developer to review code at the very time that no one has any time to spare. First one review gets postponed, then another, and before you know it, there are no more reviews. A more sustainable practice is the rotating code review, where each week a different developer's code gets reviewed by the team. While this doesn't guarantee that code is reviewed at the most optimal time, it does set a nice cadence and predictability for reviews, and helps instill a healthy amount of fear in developers, which in turn fosters better coding practices (i.e. "it's possible that my code will be reviewed tomorrow, so I better write good code today"). In general, the more often you do code reviews, the easier, quicker, and more effective they will be.
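The rotating review can be scheduled mechanically; a minimal sketch, assuming a simple round-robin order (any fair ordering would do):

```python
# Sketch: round-robin rotation deciding whose code is reviewed each week.
from itertools import cycle

def review_rotation(developers):
    """Yield, week by week, the developer whose code the team reviews."""
    return cycle(developers)

rotation = review_rotation(["Ann", "Bob", "Carla"])
first_four_weeks = [next(rotation) for _ in range(4)]
# -> ['Ann', 'Bob', 'Carla', 'Ann']
```

Because the order is fixed and public, everyone can see their turn coming, which is exactly the predictability (and gentle pressure) the practice relies on.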
Account for Code Reviews in Estimates: Using the process defined above, code reviews require only a minimal time investment from each team member - about 5 hours for the author and 2 hours for each reviewer. In most cases, for both project work and staff augmentation, this time can be factored into the normal project schedule (like unit testing, documentation, etc.).
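Under those estimates, the cost of one review cycle is simple arithmetic:

```python
# Cost of one review cycle under the estimates above:
# roughly 5 hours for the author and 2 hours per reviewer.
AUTHOR_HOURS = 5
REVIEWER_HOURS = 2

def review_cost_hours(num_reviewers):
    """Total team-hours for a single review cycle."""
    return AUTHOR_HOURS + REVIEWER_HOURS * num_reviewers

print(review_cost_hours(3))  # a three-reviewer review: 11 team-hours
```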
Hold review meetings over lunch: A good way to ensure that code reviews don't intrude on normal, burn-downable project work is to hold the meetings over lunch. This also helps make code reviews less formal and a bit more fun.
Use code reviews as a vehicle to discuss architecture and design: On many projects, there is no separate, allocated time for team members to discuss design or architecture issues. If this is the case, Code Reviews can serve this purpose as well. As a discussion about one particular method unravels into a more abstract discussion about performance or extensibility, it's useful to not cut this discussion short, as it can expose important gaps in the architecture or design.
Code reviews should not be performance reviews: In the rare cases where project managers, business analysts, or even clients are involved in a code review, it's crucial for the tech lead to ensure that all participants understand that the code review should never be used as a way to assess the author's competence or performance - it should only be used as a tool for improving code and sharing knowledge. A good code review depends on open, honest dialog, and this is subverted when participants feel as though they are being critiqued.
Resources
- Code Complete, Steve McConnell. Chapter 21.3: Formal Inspections
- Peer Reviews in Software: A Practical Guide, Karl Wiegers (November 2002)
- Part of Your Complete Breakfast: Code Review is a Source of Essential Vitamins and Minerals, Stephanie Lussier (2002)
- Pair Programming vs. Code Reviews, Jeff Atwood (November 2007)
- Code Reviews without the Pain, Robert Bogue
- Seven Truths about Peer Reviews, Karl Wiegers
- Code Reviews, Srivaths Sankaran
- What Makes a Code Review Trustworthy?, Stacy Nelson and Johann Schumann
Evidence
In Code Complete, Steve McConnell describes specific studies that indicate that code reviews do indeed improve the overall quality of a software system. Below are some of the more compelling statistics in favor of code reviews:
- "Individual inspections typically catch about 60 percent of defects, which is higher than other techniques except prototyping and high-volume beta testing."
- "The combination of design and code inspections usually removes 70-85 percent or more of the defects in a product."
- "On a project that uses inspections for design and code, the inspections will take up about 10-15 percent of project budget and will typically reduce overall project cost."
In addition, Karl Wiegers cites some compelling statistics from industry, in Seven Truths about Peer Reviews:
- Hewlett-Packard’s inspection program measured a return on investment of 10 to 1, saving an estimated $21.4 million per year. Design inspections reduced time to market by 1.8 months on one project [5].
- Inspections contributed to a ten-fold improvement in quality and a 14 percent increase in productivity at AT&T Bell Laboratories [7].