On many of our engagements we are asked to review code that has been developed either in-house or by an offshore team. We try to first sit in on a code review done by the teams, without our help, so we can see what works and what doesn’t. We’ve compiled a list of the things we see that lead to productive code reviews. There are many others we could list, but these are the most important aspects.
Set Clear Expectations on Standards Ahead of Time
We will dedicate an entire later post to effective code review standards. But the key here is to HAVE standards and communicate them to the team ahead of time. Just how prescriptive your standards should be will vary with your team. For a smaller team in a single office working closely with business users, you can probably give a little more free rein. For a team working separately on three continents that never meets your business team face-to-face, you probably need stricter, very specific standards. These standards cover things like error handling, logging/auditing, security, naming conventions, technology stack, and even code style and whitespace.
Encourage Productive Confrontation
Confrontation can be productive, and in our experience, it is a key hallmark of any mature, well-functioning team. Team members should be encouraged to provide constructive feedback and to challenge assumptions. This fosters confidence, but it also brings out issues that, if left to fester without being addressed, can destroy team cohesion and morale. Like science, software development works better when peers feel comfortable questioning decisions and forcing developers to think about alternatives.
Discourage Nitpicking/Personal Attacks
Confrontation is good, but not when it devolves into personal attacks or nitpicking minor details. The team leader needs to watch for both of these; once they start to happen regularly, people naturally respond by becoming less confrontational. A good barometer for whether a challenge is too myopically focused is to ask what the benefit to the team would be if everyone followed the request in the challenge. For instance, if everyone followed guidelines for writing a good hashCode() implementation, large HashMaps and other caching mechanisms would have more predictable and scalable behavior. That challenge is probably worth following. On the other hand, if every developer calls if (logger.isDebugEnabled()) prior to emitting debug output, you may, on the high end, expect a 0.001% improvement in performance. While this isn’t a bad thing, it probably isn’t worth spending much time on in a code review.
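To make the hashCode() example concrete, here is a minimal sketch of the kind of implementation such a guideline would ask for. The Point class and its fields are hypothetical, invented purely for illustration:

```java
import java.util.Objects;

// Hypothetical value class illustrating a well-behaved hashCode().
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Consistent with equals() and mixes in every field compared there,
    // so buckets in a large HashMap stay evenly distributed.
    @Override
    public int hashCode() {
        return Objects.hash(x, y);
    }
}
```

The key property the review would check is that equal objects produce equal hash codes and that all fields used in equals() feed into hashCode(); skipping a field skews bucket distribution and degrades HashMap performance at scale.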
Measure and Review Results
The point of the code review is not to embarrass developers or impress them with esoteric knowledge of how to most efficiently synchronize a 64-bit local variable across multiple threads in Java. The point is to improve the quality of the code. To this end, someone should document the changes that are suggested and follow up after an agreed-upon time to make sure the recommended changes are made. This sounds obvious, but very rarely do we see teams actually follow through on it. We recommend starting every code review with a quick summary of the results from the last one. Anything that has not been addressed should go on the docket for the next review.
Many of the checks performed in code reviews can be automated in Jenkins or Hudson. There are thousands of plugins to support code quality, test coverage, test automation, and code style, and to make recommendations for refactoring. These are highly recommended because they offer a historical record of results and allow management to monitor code quality objectively over time. Although they can be fairly easily incorporated into any continuous integration process, keep in mind that a fair amount of planning must go into this. This recent post gives some excellent guidelines on the things that drive code quality. We will later write posts that specifically demonstrate how to achieve better development process automation with Jenkins.
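As a rough sketch of what that automation might look like, here is a minimal declarative Jenkinsfile with quality-related stages. The tool choices (Maven with Checkstyle, PMD, and JaCoCo) and the stage layout are assumptions for illustration, not a prescription:

```groovy
// Hypothetical declarative pipeline; plugin and tool choices are illustrative.
pipeline {
    agent any
    stages {
        stage('Build and Test') {
            steps {
                sh 'mvn -B clean verify'          // compile and run unit tests
            }
        }
        stage('Static Analysis') {
            steps {
                sh 'mvn -B checkstyle:checkstyle pmd:pmd'  // style checks and refactoring hints
            }
        }
        stage('Coverage Report') {
            steps {
                sh 'mvn -B jacoco:report'         // coverage data for trending over time
            }
        }
    }
}
```

The value of putting these in the pipeline, rather than running them ad hoc, is exactly the historical record mentioned above: each build archives the reports, so quality trends are visible to management without anyone compiling numbers by hand.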