This is the first post in a three-part series about the relationship between Testing and Product Risks. In this post, I introduce the idea of focusing conversations about testing on types of risk, rather than the common way of talking about testing in terms of types of testing.





Types of testing? Or types of risk?

When you are describing the testing that you are doing to someone unfamiliar, or even familiar, with the craft of software testing, do you talk about the types of testing that you do? Or do you talk about the types of risks that you test for? Both? Neither? Is there a difference?

There definitely is a difference. But it might not be obvious, so firstly, let me explain what I mean by each of these, and then I’ll explain why I feel it’s important and more appropriate to talk about types of risks instead of types of testing.





Types of testing

It’s fairly common to hear people talk about types of testing. Examples of this are: Functional Testing, Regression Testing, Performance Testing, Usability Testing, Accessibility Testing, Security Testing, Integration Testing, etc, etc, etc…

All of these types of testing are trying to describe the testing being done in relation to specific areas of concern. But if you think about it, all of these types of testing are really just describing testing that is specifically focused on testing types of product risks.

Functional Testing is testing that focuses on functional risks. Regression Testing is testing that focuses on the risk of the software regressing as it changes. Integration Testing is testing that focuses on the risks of how the feature, component or part of the software being worked on integrates with the other features, components or parts connected to it.

Things like “exploratory testing” or “scripted testing”, well, they’re approaches to testing, and things like “black box testing” or “white box testing” are testing techniques. So I don’t include these as “types of testing”.





Types of risk

Imagine yourself testing something. Think about an instance of a test – a test idea that you might have. What drives that test idea?

When testing software, our tests relate to some kind of product risk.

By “product risk”, I mean risks that specifically relate to the product, as opposed to business risks, people risks, project risks, or other categories of risk that lie beyond the product.

Whether your test follows a pre-written script to check an expectation, or it is an instance of your exploratory testing of a specific test idea, your test relates to a type of product risk. We put difficult data into a field to test for the risk of difficult data not being handled correctly. We simulate ten thousand people browsing the feature at the same time to test for user-load-related risks. We use tools and compare the software against accessibility standards to test for the risk that the software is inaccessible or fails to meet those standards… A test relates to some kind of risk that we are testing for.
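The “difficult data” idea above can be sketched as a small risk-focused check. This is a hypothetical illustration, not something from the post – the `normalize_name` function and the inputs are invented purely to show test ideas grouped under one risk:

```python
# Hypothetical example: probing the risk that difficult data is
# mishandled, using an invented input-normalising function.

def normalize_name(raw: str) -> str:
    """Trim surrounding whitespace and collapse internal runs of spaces."""
    return " ".join(raw.split())

# Each input below targets the same product risk: difficult data
# not being handled correctly.
difficult_inputs = {
    "": "",                               # empty string
    "   ": "",                            # whitespace only
    "  Ada   Lovelace ": "Ada Lovelace",  # messy spacing
    "O'Brien": "O'Brien",                 # embedded quote
    "名前": "名前",                        # non-ASCII characters
}

for raw, expected in difficult_inputs.items():
    actual = normalize_name(raw)
    assert actual == expected, f"{raw!r}: got {actual!r}, expected {expected!r}"
```

Notice that nothing here needs a label like “data input testing” – the checks are simply tests for one named risk.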

“XYZ Testing” is testing that focuses on the risks of “XYZ”. As I mentioned above when talking about types of testing, a type of testing is testing that focuses on a specific type of risk.

But here’s the catch…

Did you know that there are more than 100 different kinds of product risk? And for some of these risks, we would never consider calling the testing of them a type of testing.

If I asked you to name as many types of testing as you could, you’d do very well to name 15-20. I’ve tried this with teams in various companies and within the community, and at a push people usually get to 15 different types of testing. But if I asked you to name different kinds of product risks, I know for sure that you’d name more. When I asked the same groups of people that named 15 types of testing how many types of risk they could name, they typically listed around 50-60 before we ran out of time – and they would have listed more.

Here are a couple of examples:

If our context were that we were working on a mobile app, then something we’d test for is how much our app runs down the battery. Have you ever heard of “battery consumption testing” as a type of testing? No… But this is a type of product risk that we should definitely investigate! Another risk might relate to changes in our mobile connectivity. Another might be how our app integrates with specific mobile OS settings… Has anybody heard people talk of these risks as a “type of testing”? There are lots of risks relating to mobile that we should test for, but that would never be spoken of as a “type of testing”.

Let’s take another common context – if we think about data (most software tends to use data in some way or another these days)… Well, there are around 20 different kinds of data risks. To name a few: data correctness, data amount (for a single data transaction), data transaction amount (from a multi-transaction load perspective), data type, data usage (where the data you’ve entered is used), data consistency, data creation, data reading, data updating/editing, data deletion, data error handling, data transaction error handling, data input method, etc, etc, etc… There are a lot of types of data risks. But take any of the ones I have mentioned. Have you ever heard anyone call any of them a “type of testing”?

“Oh, hey! We need to do that data input method testing now!”. No. And if you have, it’s definitely not mainstream as a type of testing, and anyone who said it was talking about the testing for that specific product risk anyway.

Benefits of talking about types of risks over types of testing

There are a few big benefits that you get immediately if you switch your language to talk about types of risks over talking about types of testing.

You move away from implicitly talking about testing phases. Types of testing subconsciously force our thinking down a path of: “we need to do this type of testing, then do that type of testing, then do that other type of testing…”. It’s not lean to work that way. Imagine instead being in the position of thinking: “right, we’ve got this feature to test, and we are aware of all these potential risks, so I have these test ideas, and I can combine some ideas together into one test and cover off a few risks with that one test! Woohoo!”

You get better at telling your testing story – e.g. “this test was to investigate this risk. Here’s what I discovered about it. I need more time to test this feature because this risk is important to investigate.”

You spot gaps in your testing more easily – e.g. “we tested for risks relating to lots of data being used in the transaction (data load risks), but that made me think about transaction load risks, so what if we had lots of transactions at one time?”

It adds a little extra structure to your test charters for your exploratory testing. If you are familiar with Elisabeth Hendrickson’s amazing test charter template – “Explore [target], with [resources], to discover [information]” – you could modify it slightly to add more structure to the information you are trying to discover: “Explore [target], with [resources], to discover [information about specific risks]”.

It helps grow your lateral and critical thinking skills around how to test for a specific risk. There is always more than one solution, so if you know of a risk that you need to test for, you can start to think of many different possible ways to test whether the risk is actually a problem or not.

You’ll also get better at discovering risks that you might not have thought about before. You’ll certainly be more likely to ask the question: “What risks have we not thought about yet?”

Watch out for some traps!

There are a few traps that you should beware of too.

Exploratory Testing and Automation are not types of testing. They are approaches to testing.

Some risks won’t matter, so you should always ask “is this risk important?” before investing time in testing for it; if the answer is no, move on. We can’t test everything… Prioritising our tests is essential.

Testing isn’t just about testing for known risks. For example, some of our initial testing might be done to uncover risks.

There are different categories of risks too. It’s common for people to confuse business risks with product risks, project risks or people risks, and some risks relate to other risks across the categories. For example, when testing, it doesn’t make sense to say “I need to test for the risk that the business loses some customers due to bad reviews”, unless you are specifically testing the business’ strategy and goals. It makes more sense to think about product risks when talking about testing.

In the next post of this three part series, I’ll explain how I use an investigative testing approach to uncover risks, and will share a model and an example to help visually communicate my thinking.