Friday, December 6, 2013

Code Review - Just Do It! (Part 1 - Feedback)

I cannot overstate how important I think code quality is. And yet, some people believe that code review is not for them. Perhaps they think it takes too long, or that they're too good to be reviewed by other team members.

Recently I gave a talk about why I believe code review is so important. You can find it here:



For those of you who don't speak Hebrew, or who prefer reading to watching the video, I'll try to summarize it here. I've split the lecture into three posts, so don't forget to follow up for the next ones.

Review - Not only for developers


I guess it won't come as a complete surprise that review was not invented by or for the software engineering industry. Students at university ask peers to review their work, and sometimes even pay good money for a professional review. Authors and journalists wouldn't dream of releasing their writing without having it reviewed by someone (usually by more than one person). Even this blog is reviewed by native English speakers, to make sure I don't write in Hebrish (an unintelligible mixture of Hebrew and English).

How can it be, then, that not all software engineers think their work should be reviewed?

Feedback is all around us


I have been working with agile methodologies for more than three years now. When we try to be agile, we seek feedback at every step of the way, in order to solve problems as early as possible. In "The Lean Startup", Eric Ries describes this as the feedback loop.

 

During our work, we encounter a lot of feedback loops. I tried to map the main ones in my work at Outbrain. I found six main feedback loops during a feature's development, from the moment its requirements are handed from the PM to the development team until the feature is released:




We start by writing a design, usually followed by a review and feedback from our peers. When we feel comfortable with the design, we start coding, and while we code, the IDE warns us about errors in an extremely short feedback cycle. We then use automated tests to get feedback on whether we have broken the code, and when the code is committed it goes through a build process that gives us feedback on how well our code integrates with the rest of the system.
After the code is released, we have a great monitoring system at Outbrain to monitor the feature's effect (on the users, on the server load, or on any other KPI). Lastly, when things go wrong, we conduct a take-in (a post-mortem) to make sure we learn from our mistakes.
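
To make the automated-tests loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the normalize_ctr function and its tests are my own illustration, not Outbrain code); the point is that a unit test turns "did I just break something?" into feedback that arrives within seconds:

    # A minimal, hypothetical example: a pure function and two tests that
    # fail almost immediately if a change breaks the expected behavior.

    def normalize_ctr(clicks, impressions):
        """Return the click-through rate, guarding against division by zero."""
        if impressions == 0:
            return 0.0
        return clicks / float(impressions)

    def test_basic_ctr():
        assert normalize_ctr(5, 100) == 0.05

    def test_zero_impressions():
        # If someone "simplifies" away the zero-check, this test catches the
        # regression long before the code reaches the build or production.
        assert normalize_ctr(5, 0) == 0.0

    if __name__ == "__main__":
        test_basic_ctr()
        test_zero_impressions()
        print("all tests passed")

Running tests like these on every change (or straight from the IDE) is exactly what makes this loop automatic: no human needs to be available for the feedback to arrive.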

Looking at all of these feedback loops, we can divide them into two groups. The first would be "Automatic Feedback" and the second would be "Human Feedback", as described in this picture:


An interesting observation about these two groups is that the "Automatic Feedback" group exposes symptoms, whereas the "Human Feedback" group exposes problems.
This means that we start our coding process by asking for feedback (the design review), and only after we're done and the feature is out do we try to learn from our mistakes (the take-in). In the time between these two human interactions, we're pretty much on our own. Adding a code-review cycle gives us another opportunity to get human feedback and improve our code before it's committed and deployed.

In my next post, I'll try to convince you that code review is great for the reviewer, for the reviewee, and, of course, for the organization.

To be continued...


Find me on Twitter: @AviEtzioni



