At Netguru, we keep everything on a master branch and deploy as often as we can. Our code review process works on a commit-by-commit basis, and you have to get your code reviewed by somebody, or else your build will fail.



Note: contrary to suggestions in some comments on Reddit, the first part of this blog post is not something we deal with in our everyday life at Netguru. Part 1 is just a set of observations we have made when joining other development teams on our clients' projects. Our default flow is described below. It's very different from FB/PR flows, so we try not to enforce it - when joining a project you want as little takeover friction as possible. Another blog post on that is on its way, but it's safe to say we can follow what's described here in 9 out of 10 projects we do. Right now the Netguru team is 50+ people; the average project team size is 2 (currently the maximum is 4). Last year we also saw this flow working well on a project team that varied from 10 to 14 people.

Single-branch flow with distributed, mandatory code review

This is what our review looks like:

The grey labels are JIRA tickets, to give you more context. You click the eye icon to open the commit preview on GitHub, and use the dropdown menu to change the commit state.

The rules are simple:

Push everything to master.

Every commit pushed is ‘pending’ for review.

Pending commits get auto-rejected in 2 business days.

Another party has to accept or reject your commits; you can’t do it by yourself.

Rejected commits stop the build (the review state is checked for every build through an API, and the CI build fails if response['rejected'] > 0).

You can ‘pass’ a commit (this gives the author an extra 2 days to fix it).

This means you can push your code to production instantly if you need to, but within 2 days somebody will have to take a look at it.
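The build-stopping rule above can be sketched as a small CI step. This is a minimal sketch, not our actual tooling: the shape of the review service's JSON response (pending/accepted/rejected counts) and the function names are assumptions for illustration.

```ruby
require 'json'

# Hypothetical CI gate. We assume the review API returns per-commit
# counts in a JSON payload such as:
#   { "pending": 2, "accepted": 17, "rejected": 0 }
def build_allowed?(review_state)
  # A single rejected commit is enough to fail the build.
  review_state['rejected'].to_i.zero?
end

# In a real CI step the state would come from an HTTP call to the
# review API; here we simulate the parsed response.
state = JSON.parse('{"pending": 2, "accepted": 17, "rejected": 0}')
abort('build failed: rejected commits present') unless build_allowed?(state)
puts 'build may proceed'
```

The point of the design is that the gate is a dumb check at build time; the actual reviewing happens asynchronously, commit by commit, without blocking pushes.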

Some insights from Wiktor on this flow can be found here: strict peer code review

This process is very strict, but developers enjoy it.

Developers get very direct feedback on their work, no bullshit.

There are no consequences for making small mistakes along the way, other than the fact that they must be fixed.

Bigger mistakes or failures happen very rarely, as people are very conscious that their code is going to make it to production very fast.

The feedback and course corrections are fast too (the feedback loop is never longer than 2 days, so devs don't come back to work on something they finished over a week ago).

There is always a second pair of eyes looking through the code.

It’s easy to find a reviewer, because they also have some commits to review and you can trade. We have a special marketplace room on HipChat for trading commits to review, called ‘tradeguru’.

The review process itself does not stop the overall flow. We use tests (both automated and manual) to ensure that a feature is working correctly. This makes caring about the quality of the code asynchronous from deploying the project and showing off features to our clients.

Apart from this review, developers within a project team often use the compare view to see what's going to be deployed, just as a final check. Since we deploy often (ideally at least once a day), the scope should never be too big to comprehend.

Growing your team, culture and knowledge

In this kind of workflow, where everybody trades with everybody, some great possibilities open up:

A senior dev will quickly pass or reject mistakes made by a junior dev. This makes learning at Netguru extremely fast.

A junior reviewing a senior's code will pass commits they don’t understand, and 'force' the senior to explain the code. Fast learning again.

Whatever your experience level, you get to see the ‘tricks’ of other developers and share your own solutions. The information flows across multiple people and projects, and the team’s overall knowledge increases.

We have even started doing internal review webinars over Google Hangouts, to show newcomers how picky you can get in your review. This speeds up the knowledge flow even more.

For Netguru as a company, this also means that we were able to grow from 5 to 50, and we are not scared of losing project quality when we think about doubling our size. The core rules of this flow (single branch, commit-by-commit review, failed builds on rejected commits) were established in our company 4 years ago, and have worked out great for us.

Feature flags

To make sure bigger features or refactorings don't get in our way, we use feature flags in our code. Usually it's as simple as a features.yml config file, but you can get sophisticated with tools like flip to mark your work as not ready.
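A features.yml-style flag check can be as small as the sketch below. The flag names and the helper are made up for illustration; in a real Rails project the YAML would live in a config file rather than an inline heredoc.

```ruby
require 'yaml'

# Hypothetical flag file contents (would normally be config/features.yml).
FEATURES = YAML.safe_load(<<~YML).freeze
  invoicing: true
  new_dashboard: false
YML

def feature_enabled?(name)
  # Unknown flags default to off, so half-finished work stays dark.
  FEATURES.fetch(name.to_s, false)
end

# Unfinished work ships to master behind its flag:
def dashboard_template
  feature_enabled?(:new_dashboard) ? 'dashboards/new' : 'dashboards/legacy'
end

puts dashboard_template # prints "dashboards/legacy"
```

Flipping a flag to true is then a one-line change, which keeps big features deployable long before they are visible.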

There are different approaches to this subject, and extremists can always dissect the whole project into smaller feature apps. The general approach we always want to remember and follow is:

"Feature Branching is a poor man's modular architecture" - Dan Bodart

Read more about it in a blog post by Martin Fowler

Main values

To wrap this up, what we value the most is:

Quality without losing the pace.

A culture of direct feedback and full visibility that nurtures and grows team knowledge.

A never-ending, very fast feedback loop.

We often feel that pull requests with features are like handbrakes on those values. It feels horrible when you have to wait for someone’s acceptance before your code gets to master and others can use it; it implies a lack of trust in your work. We trust each other, and we review to get a second coder’s eyes on the code and correct course, without ever halting our workflow.

Make sure to read the first part of this post too - Ups & Downs of Pull Requests Flow Part I (pull request flow traps, sad stories).

PS. Special thanks go to the folks from DRUG who helped me wrap my head around the topic and finish this article at their DRUG SC meeting.