Yes, you read that right! In this article, we are going to present the insights we have gathered from looking at more than 4500 code reviews.

And the best part — the reviews were made by more than one hundred expert developers spread around the globe, all sharing one common goal: reviewing the coding skills of other developers!

But first, let us put these numbers into context:

Indorse is a Skills Assessment Platform, where a panel of experts from all over the world review candidates on their software development skills. These code reviews are done either on candidates’ existing open source projects or on a particular assignment that our clients ask the candidates to complete.

Why should you care?

If you are a Talent Acquisition (TA) leader or a manager responsible for tech recruitment in your company, this article will give you data-driven insights on how software developers are typically judged by their peers.

Methodology involved

The reviews were done across 5 main parameters, and we calculated a cumulative score across the parameters. In the following sections we will first lay out each of the parameters, and then present the overall “Normalized Score”.

During a coding skills evaluation, our experts rate candidates according to 5 criteria, each on a scale from 0 to 5, with 5 being the highest:

Code Quality

Extensibility

Readability

Knowledge of Design Patterns

Test Coverage

Each candidate is reviewed by 6 to 10 experts, depending on our client’s needs. We then take a weighted average across all these criteria to produce a normalized score on a scale of 1–5.
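The aggregation described above can be sketched in code. This is a hypothetical illustration only: the article does not state the actual weights Indorse assigns to each criterion, so equal weights are used here as a placeholder, and the function name and data shape are invented for the example.

```python
# Illustrative sketch of the scoring flow described above:
# each expert rates a candidate on five criteria (0-5); we average
# the expert ratings per criterion, then take a weighted average
# across criteria to get a single normalized score.
# NOTE: equal weights below are a placeholder assumption, not
# Indorse's real weighting scheme.

CRITERIA = [
    "code_quality",
    "extensibility",
    "readability",
    "design_patterns",
    "test_coverage",
]
WEIGHTS = {c: 1.0 for c in CRITERIA}  # placeholder: equal weights


def normalized_score(reviews):
    """Compute a weighted-average score from expert reviews.

    reviews: list of dicts, one per expert (6 to 10 in practice),
    each mapping a criterion name to a 0-5 rating.
    """
    # Average each criterion across all experts.
    per_criterion = {
        c: sum(r[c] for r in reviews) / len(reviews) for c in CRITERIA
    }
    # Weighted average across the five criteria.
    total_weight = sum(WEIGHTS.values())
    return sum(per_criterion[c] * WEIGHTS[c] for c in CRITERIA) / total_weight
```

For example, if one expert rates a candidate 4 on every criterion and another rates them 2 on every criterion, the normalized score comes out to 3.0.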

Let’s dive into the parameters one by one!