Problem: Our non-tester colleagues tend to overestimate the importance of checking in a testing performance.

Why this post?

I have had to counter statements and field questions like:

1. It’s not in the requirements!

2. When can we release?

3. Have you fully tested the product?

4. Why don’t you just automate everything?

These questions and misconceptions come from people overestimating the importance of checking in a testing performance. I thought long and hard about how to clear up these misconceptions in a relatable way, and I realized that my colleagues already think like testers when they interview candidates. Building on this common ground, I want to attack the bias of favoring checking.

Testing and interviewing

Here are a few places where you can compare good testing to good interviewing:

1. It’s not in the requirements!

When you interview a candidate, do you stick exactly to the written job requirements? Do you even refer to them? You probably have a gut understanding of the kind of colleague you want, the culture of your company and the skill set needed to do the job. Testers are like that too. Software requirements are helpful, but we do not treat them as necessary, complete or set in stone.

When interviewing, there is also a bias towards gathering information rather than making an immediate decision. You are not out to ‘break’ the candidate; you just want to learn more about them and how they behave under different conditions. You do not limit yourself to what is on their resume or what was listed in your job description. You Google them, look up their GitHub profiles and read their blog posts. You want to know whether they would be an interesting colleague! You imagine what it would be like to work with them.

Testers are like that with software. We want to gather as much information about the software as possible, and we consider explicit, implicit and latent requirements. So the next time a tester reports a bug that is not in the written requirements – take a moment to appreciate the good testing and thank the tester for spotting a crucial detail!

2. When can we release?

This is another common misconception at a lot of organizations. If I have enough prior experience interviewing candidates for a specific position, I can tell you when I will finish interviewing a particular candidate. I cannot tell you when I will fill the vacancy. If this candidate fails the interview, then guess what? We need to interview a new one. Testing is similar: if I have prior experience testing a particular product or feature, I may be able to estimate how long it will take to test a build. But I cannot tell you when we are going to release.

3. Have you fully tested the product?

Have you ever “fully interviewed” a candidate? Based on your context, you probe the candidate on the topics you think are most important. Testing is similar: based on our context, we test the areas and functionality of the software that we think are important.

4. Why don’t you just automate everything?

Have you tried listing every possible question you could ask in an interview? Have you tried hiring using only a written test? Probably not, because you accept that an interview is an unpredictable dialogue between you and the candidate. Sure, you begin with a script, but you quickly adapt based on the responses you receive. You can guess a few ways in which a candidate might mess up an answer, but it is impossible to know all the ways in which a candidate could mess up an answer. This is why no two interviews are identical: going in, you do not know what nugget of information you are going to discover; after the fact, you do. You also understand that a binary pass/fail is not enough to evaluate the responses. You need to exercise your judgement to evaluate each response and decide on the next question. Like interviewing, testing is a mixture of scripting, thinking, reacting and judging. You cannot script everything.

References

This analogy has been used before, but (based on text search) online references to it have been sparse.

1. The prolific and thought-provoking Michael Bolton has used this analogy before, in both a post and a tweet:

Excellent exploratory testing is like interviewing a program. Imagine that you work at a placement agency, linking candidate workers to your clients. One of your key tasks is to qualify your candidates before you send them out for jobs or for interviews. To make sure they’ll be ready for whatever your clients might throw at them, you test them through an interview of your own. You can plan for that interview by all means, but what happens during the interview is to some degree unpredictable, because for each question, the answer that you get informs decisions about the next question.

–Michael Bolton

NOTE: Yes, ‘excellent exploratory testing’ is the closer parallel. Comparatively, my analogy is a stretch and does not capture a lot of what testing really is. But my goals are different – I am using this analogy specifically to appeal to non-testers to think deeper about testing.

2. A teacher/researcher used this analogy in a different context. As a culture geek, I found the entire paper fascinating: it is written by someone who spent an entire career (45 years!) examining ritualized routines and submerged cultural understandings in the field of teaching and learning. The part relevant to this blog post is in the section: DISCOVERING HOW STUDENTS USE THEIR KNOWLEDGE TO ANSWER QUESTIONS ON TESTS.

3. I found this comment by a non-professional tester (the CEO of GarageGames):

To me, testing is similar to interviewing. If you don’t have some standard to compare with, you have no way of gauging if your current release is better than the last.

–Eric Preisz

If this analogy works for you, or if you have a better approach, please comment below.