What’s fair?

What makes a challenge fair is of course up for debate. We’re dealing with at least two definitions of fair here: fairness in what we ask of candidates and fairness in evaluation. We want our process to be fair to candidates and fair among candidates.

Here are the broad points I wanted to satisfy, informed by the Matasano hiring post and my own negative experiences with hiring challenges.

Objective scoring

A candidate should know exactly how they are being reviewed. This is probably the most important thing you can do to make your programming challenge fair. The benefits to you and the candidate are manifold:

Objective scoring forces you to think through how you will evaluate the submissions in a standardized way. This will not only save you and other reviewers a lot of time, but it also tells the candidate a lot about your culture and values. For instance, do you put a lot of weight on clean, well-architected code, or do you only score on getting the correct answer? Does the complexity of the implementation matter, even if it’s right? Does the candidate get extra points for including tests and documentation?

It helps a time-strapped candidate prioritize which parts of the solution to work on.

It allows you to quickly build a distribution of scores, which shows you the quality of candidates applying for the position. This can take some time, depending on the variance of the responses, but it will tell you when someone gives an exceptional response, or whether your challenge is too easy or too hard.

We spent a lot of time trying to come up with an objective scorecard that still allowed good differentiation between candidates. For reference, here is how we scored submissions.
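To make the idea concrete, here is a minimal sketch of what a weighted scorecard could look like. The criteria and weights below are hypothetical, not the actual rubric linked above:

```python
# Hypothetical rubric: each criterion has a weight, and reviewers assign
# a raw mark from 0 to 5 per criterion. Weights sum to 1.0.
RUBRIC = {
    "correctness": 0.5,
    "code_quality": 0.3,
    "tests_and_docs": 0.2,
}

def score(marks):
    """Combine per-criterion marks (0-5) into a single weighted score."""
    return sum(RUBRIC[criterion] * mark for criterion, mark in marks.items())

# A submission that is correct but lightly tested and documented:
total = score({"correctness": 5, "code_quality": 4, "tests_and_docs": 2})
```

Writing the rubric down forces exactly the prioritization discussion described above: the weights are your values, made explicit.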

Respect the candidates’ time

A broad, ill-defined task shows a lack of respect for the candidate’s time and reflects poorly on you as a hiring manager. If this is how you write your hiring tasks, how do you communicate tasks within the company?

A clearly defined, specific task is respectful to candidates of all experience levels. A qualified candidate can immediately estimate how much time the task should take; an unqualified candidate should be able to see immediately that they’re out of their depth. In both cases, the candidate can decide for themselves whether the time investment is worth the risk.

Was our task well defined? Have a look and see for yourself!

Use blind review when possible

Inexact criteria like “culture fit” have long been suspected of being a more palatable modern incarnation of discrimination. Even if that weren’t true, you’d want to use blind review anyway since it’s a basic tenet of good experimental design. For the subjective parts of our scoring — code style and conventions — we used blind review.
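In practice, blind review just means reviewers never see who wrote what. One way to do this (a sketch, not the actual tooling used here) is to replace candidate names with shuffled anonymous labels before handing submissions to reviewers, keeping the mapping aside to join scores back later:

```python
import random

def anonymize(submissions):
    """Map each candidate name to an opaque label like 'candidate-03'.

    `submissions` maps candidate name -> submission text. Returns the
    blinded submissions plus the name-to-label key, which only the
    person matching scores back (not the reviewers) should see.
    """
    names = list(submissions)
    random.shuffle(names)  # so label order reveals nothing about arrival order
    key = {name: f"candidate-{i:02d}" for i, name in enumerate(names)}
    blinded = {key[name]: text for name, text in submissions.items()}
    return blinded, key

blinded, key = anonymize({"Alice": "def solve(): ...", "Bob": "def solve(): ..."})
# Reviewers score `blinded`; scores are joined back to names via `key`.
```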

Be realistic

Don’t have your candidates invert binary trees unless that’s the kind of stuff you do in your company, or you have so many candidates you can get away with it. Neither is the case for us. This is, after all, as much a chance for the candidates to review us as for us to review them — they wanted something as close as possible to what they might actually be doing on the job. For our part, we made a toy task that mocked the entire backend at the time of the posting.

Provide feedback

No matter how much you respect the candidates’ time, this is still a risky time investment for them. If you have more submissions than positions, someone is not going to get a job out of it. They deserve to know not only how they did in an objective sense, but also how they compared to the average candidate. In addition, specific feedback from the reviewers should be included if available.

To this end, as soon as we had more than two submissions, we included the mean and standard deviation of the scores across all candidates when responding with the individual candidate’s score.
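The feedback described above amounts to a couple of lines of code. A sketch (the exact wording of the message is invented here):

```python
from statistics import mean, stdev

def feedback(all_scores, candidate_score):
    """Report a candidate's score alongside the cohort mean and std dev.

    Requires at least two scores, since the sample standard deviation
    is undefined for a single data point.
    """
    mu, sigma = mean(all_scores), stdev(all_scores)
    return (f"Your score: {candidate_score:.1f} "
            f"(mean {mu:.1f}, std dev {sigma:.1f} "
            f"across {len(all_scores)} submissions)")

message = feedback([3.1, 4.5, 2.8, 4.1], 4.1)
```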

Alert immediately

If you’re evaluating on a rolling basis, give the candidate feedback as soon as it’s available; otherwise, send it at a fixed date after the deadline. The candidates have invested a lot of time and deserve to know where they stand.