System design interviews assess your ability to discuss engineering tradeoffs while diagramming a system that powers an example application or feature. The interview complements traditional coding interviews by focusing on the macro issues of software and the situations that influence the technology choices we make.

I believe that system design interviews are the most challenging interviews to give (and take), mainly because the problems are so open-ended and the interview format is so amorphous across the industry.

I’ve always had shadows ask me for guidance when giving these interviews, so I wanted to jot down some difficult areas of the interview that I think about. I’ll focus on the side of the interviewer, but there’s helpful information to glean for folks taking this type of interview.

Selecting a good problem

Some problems require spending time up front to build up the requirements (i.e., get on the same page about what the system should do). That obviously reduces the time you can spend discussing solutions, which may work against your ability to get meaningful signal for your assessment.

You want to be able to ask “why?” many times throughout the interview. That question opens up a discussion of tradeoffs.

Other problems come with well-known requirements (e.g., build a Twitter clone) and let you get quickly to features that showcase the heart of design tradeoffs. This limits what you can ask, though, since you're banking on popular apps that the candidate has used. You also run the risk of the interviewee being over-prepared, which means you have to work harder to find the bounds of the candidate's comfort zone.

You may want to optimize for a person's specialty (focusing on the frontend or backend) or test breadth with a full-stack problem. I particularly like doing full-stack interviews even for frontend candidates to see how comfortable they are with backend systems (with my expectations set accordingly). Having no familiarity with backend tech/architecture might hint at less experience or a throw-it-over-the-wall mentality (a useful signal for follow-up questions or future coaching needs).

Having a plan of attack

Do you start with an ambiguous opener to gauge how the candidate deals with ambiguity and gathers requirements? e.g., Start with “let’s build a Twitter clone” and then stay quiet to see where they begin.

Do you ask for the tech stack they'd use? If so, when do you ask? How do they react if you ask very early on (do they insist on gathering requirements first, or do they lean on tech they already know)? This is another good opportunity to ask "why?"

You should know the features you’d like to build (balanced for difficulty and time to solve), situations that scale up those features, and some baseline solution as a fallback in case the interviewee is horribly stuck.

Do you start with a single-user application and scale up to thousands/millions of users? Or do you jump right into big scale?

If you start with a single-user app (small scale), does the candidate choose frameworks or go with quick-and-dirty tech? Their ability to justify their decisions is key. There’s no “right” answer here, only reasons why the choice was made.

Your performance as an interviewer

Keep the interviewee calm, engaged, and making progress. I find that being friendly, upbeat, and reassuring is necessary to keep the candidate's nerves at bay.

Similarly, if the interview is nosediving despite your best efforts, you need to keep your emotions at bay and avoid showing signs of frustration, boredom, or discouragement.

Perceived collaboration: I like standing up with the interviewee at the whiteboard to make it feel like we’re working on the problem together. The danger here is getting too involved and talking too much. Sitting down the entire time that the interviewee is drawing makes the interview feel impersonal and makes it obvious that you’re watching/judging them.

Reinforce that there is no right answer. Stating this up front is helpful (especially for backend systems) since there are common patterns (horizontal scaling, caching, etc.) for scaling subsystems.
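As one concrete example of those common patterns, here's a minimal sketch of cache-aside reads that a candidate might whiteboard (all names and data here are hypothetical; a real system would use something like Redis or memcached rather than a dict):

```python
# Cache-aside sketch: read through an in-memory cache, falling back to a
# stand-in "database" on a miss and populating the cache for next time.
cache = {}                               # stand-in for Redis/memcached
database = {"user:1": {"name": "Ada"}}   # stand-in for the source of truth

def get(key):
    if key in cache:                     # cache hit: skip the database
        return cache[key]
    value = database.get(key)            # cache miss: read the database
    if value is not None:
        cache[key] = value               # warm the cache for later reads
    return value
```

Even a toy like this opens up "why?" questions: what happens when the underlying data changes, how entries expire, and what a cold cache does to the database under load.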

Know when to keep quiet and when to chime in. You want to give the candidate time to think, but you need to sense if they’re quiet due to being stuck or mentally spinning in circles.

Tastefully identify holes and issues in the presented solution. Telling the candidate outright that there's a bug or fault risks the dynamic snowballing: the candidate may second-guess everything after that and spin in circles. Instead, present a situation of increased scale (or some counter-example) and ask how the solution would behave and how they might adapt to the discovered faults.

Keeping up: As the interviewer, you need to follow along with the candidate's solution. This gets easier over time, but you still need to stay engaged to understand the architecture and the thought process behind it. Keeping up allows you to find issues, spot areas to dig deeper into, and sense implicit assumptions that should be clarified. There's also a silver lining when you're not following along: you can gauge how patient the candidate is when repeating themselves, which is a signal for how they'd interact with teammates.

If you have someone shadowing your interview, you need to set the limits of when/how they chime in. Syncing ahead of the interview is necessary to get on the same page. Even if the shadow makes mistakes (which is a good learning opportunity), it's beneficial to review them in a debrief after the interview.

Gauging both breadth and depth

As you go broad, dip into the details. The interview itself should focus on a high-level architecture, but it's useful to dip into lower-level areas (like database schemas, memory usage, UI interactions, etc.) to see what the candidate knows and if/how they struggle. It's easy to get carried away with the micro discussions, though, so be sure to pop back up to the macro. Leave the focus on micro problems to the coding interviews.
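For instance, a dip into database schemas for the Twitter-clone problem might look something like this sketch (table and column names are hypothetical, just enough to drive a micro-level discussion):

```python
import sqlite3

# A throwaway in-memory schema for discussing a Twitter-clone design.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE tweets (
    id         INTEGER PRIMARY KEY,
    author_id  INTEGER NOT NULL REFERENCES users(id),
    body       TEXT NOT NULL,
    created_at TEXT NOT NULL
);
-- The kind of detail worth probing: this index makes
-- "latest tweets by author" cheap to serve.
CREATE INDEX idx_tweets_by_author_time ON tweets(author_id, created_at);
""")
```

Whether the candidate reaches for an index here, or notes that a fan-out-on-write timeline sidesteps the query entirely, is exactly the kind of signal this dip is after.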

Does the candidate identify high-level concerns like performance, offline mode, security vulnerabilities, and technology choice? If you have to bring up those areas, can they speak to any/all of them with some level of confidence?

Do they have any textbook knowledge on how to scale web services? There’s a lot of material out there (books, articles, and videos), so having some textbook knowledge highlights curiosity or previous experience.

Do they have any prior experience scaling applications? This is an important signal for positions that require more experience. You can gauge this through the ease/confidence with which they handle scaling issues of the application.

Having objective success criteria

Were you aware of any biases that you had that could impact your assessment? Having taken unconscious bias training is helpful here.

Were there any behaviors from the candidate that made you uncomfortable?

Was the candidate overly nervous despite your efforts to calm them down? If so, which parts of your assessment would that have affected most?

Did you do anything that may have adversely affected the dynamic of the interview? If so, how can you adjust your assessment without lowering the bar?

Do you have a gauge on the candidate’s collaboration and problem solving skills based on how they answered your questions during the interview?

What is your baseline expectation for the required seniority of the role they’re applying for? Do you have an objective understanding of seniority levels?

Did they bring up various solutions to the subproblems along the way? Did they need a lot of help and handholding?

Were they able to justify their choices/decisions? Or were they just guessing the whole time?

What would you have wanted to ask or dig deeper into if you had more time? Maybe another interviewer has an answer to that question.

This isn’t a complete set of considerations by any means, but hopefully enough to guide your self-reflection and iteration on the system design interview.