Lots of bigger companies have written tomes about how they hire software engineers. A lot of what I do around here is keep an ear to the ground on what other engineering teams are doing, with the idea of seeing what makes sense for us to experiment with on our own team.

Essentially, if you are a company with name recognition, a strong technical story, and clear market fit, there is a ton of advice out there that you can probably just roll into a playbook that you, along with a seasoned recruiter, can execute. What seems to be missing is a guide for teams that are actively pursuing those characteristics, and simultaneously want to recruit talented humans to help their teams achieve their mission.

My goal is that if you are looking to grow an engineering team from 5 to 10, or from 10 to 20, this may serve as your playbook.

Our (Current) Ideal Technical Interview

I say “current” for two main reasons. The first is that, like many high-growth startups, iterating is baked into our DNA. We constantly evaluate everything about our processes, from building features, to deploying software, to recruiting software engineers. The second is that we currently have a small engineering team. The interview we do at 20 will certainly have some differences from the one we do today, but at the same time, there are certain things that we will never change, such as ensuring that we hire engineers who share the company’s core values.

Today our ideal technical interview has the following steps:

Informal Technical Discussion

The idea behind this 30-minute phone call is to learn more about who the candidate is as an engineer. The conversation is super relaxed: there is no harsh Q&A, and we do not test for any esoteric technical knowledge at this stage. What we do focus on during this time is:

What Dia&Co. does as a company, and why the candidate would want to work here.

What problems the company has that the candidate might expect to solve, and what the candidate might expect to work on.

The candidate’s thoughts on technology in general, but also related to what our team uses and what the candidate is familiar with.

Interesting problems the candidate has solved in the past and what excites them in particular about Dia&Co.

Any questions the candidate has about the company.

This interview is one of the best ways to get a sense of a candidate as a person. At this stage we try to get a feel for things like their personality, curiosity, thoughtfulness, and higher-level decision-making ability.

Homework Assignment

Provided the initial phone screen goes well, we then send candidates a homework assignment. We don’t do any whiteboard coding during the onsite interview, so the homework assignment is one of the best ways we’ve found to get an indicator of the candidate’s skill levels, as well as to get a sense for how they would write code of moderate complexity day to day.

Some of the core skills we are testing here include making sure the candidate is able to write code that is well tested, well composed, readable, performant, and maintainable.

The current homework assignment that we send to most of our candidates is more complex than implementing one function, but not so complex that it requires a huge time commitment. Our company is fundamentally based on respect for individuals, and asking for more than four hours of someone’s time for this assignment is both unnecessary and, quite honestly, disrespectful of their time. It does, however, provide us with a far superior signal of where a given candidate’s technical skills lie, and allows us to avoid the inferior method of testing for those skills by asking a candidate to implement binary search on a whiteboard.

It’s important that both small and large startups alike begin to realize that whiteboard coding tends to select for candidates who have experience coding on whiteboards.

Since this is not a skill that we ask engineers to perform day to day at our company, we do not ask them to perform it in our interview. These types of tasks amount to asking an engineer to do something that they don’t normally do, with the added pressure of a job being on the line. It creates a hostile power dynamic that is not how I’d prefer to start off a relationship with another human that I am going to be working closely with.

Onsite Interview

Provided we feel that we have enough certainty that the candidate has the right technical skills, and that those skills are developed to the right level of mastery, we will invite them onsite to meet with the entire engineering team, as well as possibly with members of our Product, Design, or Data teams, depending on the candidate and the role.

A major aspect of the onsite interview is pair programming. We do not pair program with any regularity in our day-to-day work; however, it is the best way we have found to see whether a candidate can really code, and can do so as a member of our team specifically. We do this for every candidate, regardless of role or experience level, though the actual exercise or problem that we solve together may vary based on those or other factors. All in all, the pair programming is broken into two separate 45-minute sessions with two different engineers. We always leave time for the candidate to ask the interviewer any questions they may have, including in these pair programming sessions.

We also have the candidate work through a systems design problem, usually with our CTO. Many candidates find a whiteboard to be the easiest way to communicate their thinking and to show the system they are designing during this session. Perhaps now is where I should mention that we have nothing against whiteboards themselves; rather, it is the types of questions usually associated with them that we find problematic.

The rest of the in-person interviews are geared toward meeting each of the team members the candidate would be working with most closely on a day-to-day basis. One of these interviews tests for cultural fit, which we define specifically as whether or not the candidate shares the core values of the company. It is specifically not defined in any of the other ways that have made the term a loaded one; in fact, our core values necessarily prohibit that from being the case.

Open Issues

Like any new, fast-moving company, nothing that we do is ever really “done”. We started to formalize this process back in August 2016 because at that point we felt that we had more questions than answers when it came to choosing future teammates to work with. Today I feel that we have some of those questions answered, but here is just a sample of some that remain:

Eliminating Bias

There’s no doubt about it: cognitive bias exists whether you realize it or not, and it persists even when you actively take steps to remove it. Part of the challenge of a smaller team versus a larger one is that certain bias-reducing strategies are not always possible to implement. A perfect example is the decision to bring a candidate to the onsite interview. Since the engineer who conducts the informal technical conversation is often the same engineer who grades the homework, it can be really hard to fail someone at the homework assignment stage after a really good phone conversation, regardless of how they actually performed on the assignment. We have written a rubric such that anyone should be able to grade homework assignments consistently, which in theory should help reduce bias. If we had a larger team, homework assignment review could be anonymous, so that the reviewer would not know the identity of the candidate they are grading. We would also have more bandwidth to assign homework review randomly, which would theoretically smooth out, as one example, reviews in which the reviewer was having a bad day.

The bottom line is that we are aware of bias and are actively working to eliminate it, or at least to reduce its effects, but to a certain extent, every company has to learn this from its own experience, in its own time. A good starting point is the insanely amazing “Cognitive bias cheat sheet” by Buster Benson, as well as “Managing Unconscious Bias” by Facebook. Managing bias is really about managing your thinking, so it should be an incredibly interesting topic for product engineers even outside the scope of hiring.

Structure vs. Fluidity

Another open question is how much structure is the right amount for our process. On the one hand, you want structure so that you can compare like with like. Also, interviewing is a skill: if your interview process has structure, it becomes something you can train people on, making them more skilled interviewers. Having more skilled interviewers is a massive asset to a company, because they give you a much more favorable signal-to-noise ratio for making hiring decisions. Noise is probably the most insidious problem a new team faces when it first starts to kick the tires in its first few months building a hiring operation.

On the other hand, as much as I and other engineers probably wish this wasn’t true sometimes, humans are not operating systems, and they cannot be made to behave like them. If you don’t believe me, Google “holacracy”. So we spend a lot of time thinking about how much process and planning is too much, and try very hard to strike the right balance at the right time with the resources that we have.

One area of future work here might include sitting down as a team and defining what a successful contributor looks like after 90 days. This could be broken down into technical skills and personality traits; ideally, these would be mutually exclusive, but collectively exhaustive. We could also determine any traits, skills, or accomplishments that would really separate a superstar contributor from an average contributor in 6 to 12 months. We could then design a bank of interview questions and pair programming problems specifically designed to test for the presence of these attributes, define the criteria for a poor answer, a good answer, and an excellent answer, and do our best to ensure that they did not induce bias. This is an experiment that we might pursue in the very near term as our resources expand, along with our constantly evolving understanding of our team’s purpose within the context of the larger business.

As you can see, we are working really hard to build an amazing team of engineers here at Dia&Co. and our most interesting challenges are ahead of us. If they sound exciting to you, I’d love to chat with you. Hop on over to our careers page!

P.S. I would be remiss if I did not mention how fortunate and grateful we are to be helped in this endeavor by a truly amazing People & Talent team. Their work in coordinating interviews, managing relationships, and driving strategy across our talent pipeline provides quantifiably better outcomes for us, and ultimately for our customer.