I think we can all agree, as a testing community, that in practice you can never test everything in every possible scenario.

In theory, if you had infinite time, then maybe you could. Obviously we don’t have infinite time, so we need to know when we’re ‘Done’ as a way to know when to stop. To be clear, for the purpose of this blog, I’m talking about testing in the context of a manual test environment. In the real world one could argue that testing never stops, even once software is in a live production environment.

I suppose the question I’m really asking here is ‘Where do you draw the line?’ I’m not sure any two testers would draw it in the same place if they were individually asked to ‘test’ something. One tester might spend more time looking at security and scalability, whereas another may put more emphasis on functionality and user experience. If a tester is not considering both of these areas, among many others, before they begin testing, then their testing is potentially flawed.

‘When should you stop testing?’ is a vague, subjective question without more context.

There are several factors to consider when answering this question depending on the situation.

Some considerations might include:

Are you testing a minimum viable product (MVP) or a polished fully fledged feature?

Are you limited by a strict deadline? (I would stick to the riskiest areas first if time was restricting my ability to test everything I want to)

What industry do you work in? (you may be ‘forced’ to test certain things in certain ways)

How much of what you’re testing is covered by automated tests?

In my team we have an ‘In scope’ and an ‘Out of scope’ section in our story template, which we agree upon when we kick off our stories. The Product Owner, Developers and tester(s) are all party to the conversation, so important areas have less chance of being missed. Sometimes, after the kick-off, I will create a mind map of test ideas and share it with the team. This often clears up assumptions, leading to a clearer test plan and a more accurate scope list.

When I test, the first thing I do is read through the acceptance criteria (AC). Even though some, if not all, of the AC may be covered by unit or integration tests, I never feel quite comfortable assuming these areas are fully covered until I test them manually and see the results for myself. This is no reflection on my confidence in the developers; rather, it comes from knowing that each automated test tends to cover only one specific, unchanging scenario. I will then move on to the In Scope section written at the kick-off. The In Scope section is a very good starting point for testing. However, it can be restrictive, and potentially dangerous, if you treat it as a black-and-white rule, especially if you plan to do some exploratory testing (ET) later, which I highly recommend.
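To illustrate what I mean about automated tests pinning down a single scenario, here is a minimal sketch in Python. The function `apply_discount` and the test are entirely hypothetical, not from any real codebase; the point is that the test asserts one fixed input/output pair and says nothing about the edge cases a manual tester might probe.

```python
# Hypothetical example: a typical unit test exercises one fixed scenario.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # This covers exactly one input pair. It tells us nothing about
    # negative prices, discounts over 100%, zero prices, or rounding
    # edge cases; those are the gaps manual and exploratory testing
    # can go looking for.
    assert apply_discount(100.0, 10.0) == 90.0
```

A passing test like this gives confidence in one path through the code, which is exactly why I still want to see the behaviour for myself before calling the area covered.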

Finally, I would run some ET where necessary (several ET charters may be created but not necessarily run). The very nature of ET is to have a charter so that you don’t stray too far from the area you want to test. If a discovery would take you too far outside the scope of your current charter, simply create a separate charter for each new area of discovery. I wouldn’t say you should ignore a particular area of curiosity just because it’s not listed as In Scope, or even if it’s on the Out of Scope list. You may also come up with test ideas that you had not thought of during the story kick-off.

As I’ve said, there is no black-and-white answer to the question of when to stop testing. It’s down to the tester and the supporting team to collaborate on what will and will not be tested. The important thing is that everyone is aware of, and comfortable with, the results. Hopefully this has given you some food for thought on things to consider before you can confidently say ‘I’m Done!’
