Bugs – we all hate them

One of your team’s features has a bug, again…

Either you can’t learn what you hoped to learn from the feature (is it working as we expected? Do users like it?), or it causes a bad user experience.

Best case it’s annoying; worst case it costs you time and money. Now you have to navigate rough waters with customers and fix the issue instead of moving on to other features.

If only we could minimize the number of issues and make sure they don’t return – it would free up your time for other priorities.

You’ll always have bugs, that’s inevitable, but you have to put safeguards in place that ensure you ship high-quality products and don’t waste your time in the doldrums of fixing and maintaining buggy code.

Tests are automated guardian sentinels of your time: they protect your product from bugs and regressions (old shit breaking). They let you outsource responsibility for your software’s stability to a machine, which is almost always the right move.

But tests are the software equivalent of flossing – everyone knows it’s important, but no one wants to do it, or they don’t know how.

Why is a Product Manager writing about tests?

For me, the main challenge was getting familiar with all the different test types, how they play with one another, and how responsibility is split between developers and product managers. There’s a pile of different test types, and each organization gives them different names – or even worse, the same name with a different meaning…

As a Product Manager, you should define the flows to be covered; it’s up to the developer to decide which test type to use. That said, you should know the different test types, as they have different complexity levels and bring different value. You need to know them to take part in the discussion.

Below is a bird’s eye view of the different test types we have at Soluto, when and how to use them.

*We know our definitions aren’t close to the official ones, but over time this became our common terminology, and at this stage we believe it’s better to roll with what we have than to invest time and effort in correcting it.

Unit – The purpose is to make sure a unit behaves the way you expect it to.

The first layer and most basic level of testing. Tests the smallest testable part of the software (a method, a class, etc.).

Mainly used by developers, with less involvement from Product, but it’s worth asking about the amount and type of coverage you have. Do we have too many or too few tests? Can we get better results with less effort by using other test types?

Example – A certain function should always return a number within a range (say, between 0 and 100); if the function returns anything outside this range, the test should fail.
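A minimal sketch of such a unit test in Python (the `battery_percentage` function and its behavior are hypothetical, just to illustrate the range check):

```python
def battery_percentage(raw_reading: float) -> int:
    """Hypothetical unit under test: converts a raw sensor
    reading (0.0-1.0) into a percentage, clamped to 0-100."""
    return max(0, min(100, round(raw_reading * 100)))

def test_battery_percentage_stays_in_range():
    # Even for out-of-range inputs, the result must stay within 0-100.
    for raw in (-0.5, 0.0, 0.42, 1.0, 7.3):
        assert 0 <= battery_percentage(raw) <= 100
```

If the clamping logic ever breaks, this test fails immediately and points straight at the smallest unit responsible.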

Black box – A closed-environment test for one component (e.g. a microservice). All external dependencies are faked. We define the inputs that go into the code and know what the output should be.

We don’t care what happens inside, as long as we get the correct answer.

Mainly used by developers, with less involvement from Product.

Example – If you type 2+2 into a calculator, you don’t care what happens behind the scenes; you only care about getting the correct answer (4). The process doesn’t matter.
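A sketch of the calculator example as a black-box test (all names here are hypothetical): the component’s external dependency is faked, and the test only asserts on the output for a given input.

```python
class FakeAuditLog:
    """Fake for an external dependency the component normally calls
    (e.g. a logging or analytics service) - here it swallows the call."""
    def record(self, entry: str) -> None:
        pass  # never hits the network in a black-box test

def calculate(expression: str, audit_log) -> int:
    """The component under test. Its internals could change entirely;
    the test below wouldn't care, as long as the output is right."""
    left, right = expression.split("+")
    result = int(left) + int(right)
    audit_log.record(f"{expression}={result}")
    return result

def test_calculator_black_box():
    # Known input in, expected output out - that's the whole contract.
    assert calculate("2+2", FakeAuditLog()) == 4
```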

Smoke test – A very basic functionality test of the build. The term originates from electronic engineering: after building a new device, the first test was the most basic – plug it in, turn it on, does smoke come out? If the answer is yes, that’s not a good sign…

The purpose of a smoke test is to “turn it on and go through the basics”. This type of test verifies that the code is ready for additional testing. If the smoke test fails, don’t bother moving on to the rest of the test plan.

It should be a quick and basic test that fails fast; the goal is to find major issues, or to make sure the happy flow works. Think of it as a “light E2E” test.

Example – Open the app → Click call → See that a call goes out. Don’t check edge cases, analytics, logs, or anything of that sort.

If you decide to use this test, it should be defined by the Product. What must work? What’s the happy flow?
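As a sketch, a smoke test can be as short as one happy-flow assertion. The `App` class below is a hypothetical stand-in for a real UI test driver (in practice you’d use something like Appium or Selenium):

```python
class App:
    """Hypothetical stand-in for a real device/UI test driver."""
    def __init__(self):
        self.call_active = False
    def open(self):
        return self
    def tap_call(self):
        self.call_active = True

def test_smoke_happy_flow():
    # Open the app, tap call, verify a call goes out - and stop there.
    app = App().open()
    app.tap_call()
    assert app.call_active  # no edge cases, no analytics, no logs
```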

End-to-End (E2E) – Used to test an entire flow of your product (onboarding, purchase, etc.) from start to finish. The goal is to mimic user (or “real-life”) behavior as closely as possible – happy, unhappy, and edge-case flows. The purpose of end-to-end tests is to identify system dependencies and to ensure that the right information is passed between the various components and systems.

Must be defined by Product.
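A toy sketch of an E2E purchase flow, with the E2E concern being that the right information flows between components. All class and function names here are hypothetical; a real E2E test would drive the deployed system, not in-process objects:

```python
class Inventory:
    """Hypothetical inventory component."""
    def __init__(self):
        self.stock = {"book": 3}
    def reserve(self, item: str) -> None:
        self.stock[item] -= 1

class Payments:
    """Hypothetical payments component."""
    def charge(self, amount: int) -> dict:
        return {"status": "paid", "amount": amount}

def purchase(item: str, price: int, inventory, payments) -> bool:
    """The flow under test: reserve stock, then charge the customer."""
    inventory.reserve(item)
    receipt = payments.charge(price)
    # The E2E assertion spans both components: payment succeeded
    # AND the stock count reflects the reservation.
    return receipt["status"] == "paid" and inventory.stock[item] == 2

def test_purchase_end_to_end():
    assert purchase("book", 12, Inventory(), Payments())
```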

Shit happens – bugs will happen

Before fixing a defect, consider writing a test that exposes the defect. Why?

You’ll be able to catch the defect again later if you didn’t fix it properly. Your test suite is now more comprehensive. And you’ll most probably be too lazy to write the test after you’ve already fixed the defect.
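For example, suppose a bug report says the app crashes on an empty cart (a hypothetical scenario, with hypothetical names). First write a test that reproduces the defect, watch it fail, then fix the code so it passes:

```python
def cart_total(prices: list) -> int:
    """Fixed implementation. The (hypothetical) buggy version
    crashed on an empty cart; sum([]) handles it cleanly."""
    return sum(prices)

def test_regression_empty_cart():
    # Written BEFORE the fix: it failed then, and guards us now.
    assert cart_total([]) == 0
    assert cart_total([10, 5]) == 15
```

The test stays in the suite forever, so if someone reintroduces the bug, it gets caught immediately.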

Tests are a valuable tool. I know they’re annoying and you’d rather move on to the next feature – building stuff is way more fun than maintenance – but in the long run this investment of time and effort will pay dividends.

Found this interesting? Share!