Automated testing will set your engineering team free

“Automation frees engineers to focus on the things that matter and unleash their creativity”

Azimo’s mobile apps are used to move hundreds of millions of dollars in remittances to more than 190 countries every year. Around 10,000 variables affect our testing process, including localisation in eight languages.

Despite this size and complexity, our team of just six engineers releases iOS and Android apps to production every week, with 99.5% of users enjoying an uninterrupted, crash-free experience. To achieve this, we had to think very hard about how to test our code and our user experience.

Our goal is to keep our crash-free user rate above 99.5% at all times

When we opened our internship program, we received almost 100 applications and talked to dozens of candidates. We were surprised to learn that very few had much experience testing code as part of their education or training. As any experienced software engineer knows, testing is your bread and butter.

Why do we test our code?

Many small teams and young companies cut corners on testing. When senior management sees a working product, it’s often difficult to explain the importance of testing coverage when you could be running off to build the next cool feature.

The reality is that as you build more features, your product becomes more complex. As your product becomes more complex, it becomes harder to test. As your product becomes harder to test, crashes become more frequent. You can see the problem.

That’s why at Azimo we created a rule: key implementations are only tested manually once. If a feature is important for our product, it must be tested automatically. Apart from ensuring that no product value is lost over time, we invested in automated testing for the following reasons:

- Testing can become boring. When you’ve just built a new feature, tapping every button or field can be a pleasure — but it’s unlikely to be a pleasure after the 100th or 1000th time. We use automation to reduce repetition and keep our team fresh and motivated.

- We build tests to save our time. It’s easy to launch a new instance of the app to test the login screen. But how would you test a popup that is presented to a user only under complex conditions that must be recreated manually?

- We build tests so that we can focus on things that matter. At Azimo, 80% of our testing time is devoted to automation. The remaining time is devoted to testing the hardest cases manually, so that we don’t repeat Elon’s over-automation issues. 🙂

- We build tests to make our code clean. Building automated tests requires a modularised, loosely-coupled project structure. You need to write code that makes any dependency easy to replace and any data easy to mock. This makes your code easy to test and maintain by multiple engineers working in parallel.

How to test

There are many different ways to test a big project. At Azimo we use a variety of approaches, depending on the situation:

Unit testing

The most basic method of testing your code. Unit tests are most helpful in parts of the code that affect business logic. They verify that algorithms work correctly, identify edge cases and ensure graceful failure.

We use a basic toolset to build unit tests: JUnit (the default testing framework), Mockito (for passing fake objects into tested classes) and JaCoCo (for test coverage reports). Our code follows the MVP (Model-View-Presenter) architecture and uses the Dependency Injection pattern (Dagger 2).

Sometimes we use Robolectric to test code that interacts with Android classes. However, we don’t test the Android SDK itself. Instead, we prefer to build proper abstractions that separate any kind of logic from activities, services and third-party libraries.
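To illustrate the idea (a sketch, not our production code — the class names below are invented), a presenter that talks to an interface instead of an Android view class can be verified on a plain JVM. In the real project, JUnit and Mockito replace the hand-rolled fake shown here:

```java
// Hypothetical example for illustration; the real project uses JUnit, Mockito and Dagger 2.

// The view contract keeps Android classes out of the presenter, so tests run on a plain JVM.
interface LoginView {
    void showError(String message);
    void showHome();
}

class LoginPresenter {
    private final LoginView view;

    LoginPresenter(LoginView view) { // injected dependency (Dagger 2 in production)
        this.view = view;
    }

    void onLoginClicked(String email, String password) {
        if (email.isEmpty() || !email.contains("@")) {
            view.showError("Invalid email");
        } else if (password.length() < 8) {
            view.showError("Password too short");
        } else {
            view.showHome();
        }
    }
}

public class LoginPresenterTest {
    // A hand-rolled fake standing in for a Mockito mock.
    static class FakeView implements LoginView {
        String lastError;
        boolean homeShown;
        public void showError(String message) { lastError = message; }
        public void showHome() { homeShown = true; }
    }

    public static void main(String[] args) {
        FakeView view = new FakeView();
        LoginPresenter presenter = new LoginPresenter(view);

        presenter.onLoginClicked("not-an-email", "password123");
        if (!"Invalid email".equals(view.lastError)) throw new AssertionError();

        presenter.onLoginClicked("user@example.com", "password123");
        if (!view.homeShown) throw new AssertionError();

        System.out.println("all assertions passed");
    }
}
```

Because the presenter never touches an Activity or Service, the whole edge-case matrix (bad email, short password, happy path) runs in milliseconds without an emulator.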

Integration tests

While unit tests focus mostly on plain Java code, we often need to test a whole component like an application screen or background service.

In this situation we don’t build detailed, Robolectric-powered unit tests that check every single behaviour (is this button disabled under a given condition? Is the text set correctly?). Instead, QA and software engineers build a controlled environment for the tested component and then validate whether it behaves in the way we expect.

Our transaction status screen, for example, has many different UI states. Instead of preparing dozens of tests, we prepare API or database mocks for every state and provide them via DaggerMock to the screen’s dependencies. Then Android Instrumentation tests, together with Espresso, run assertions against every single screen state.

How do you test an app screen with dozens of different states?
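The core of this approach can be sketched on a plain JVM (the real tests use DaggerMock and Espresso on a device; the status values and class names below are invented). One hardcoded mock per API status drives one expected screen state:

```java
import java.util.Map;

// Hypothetical model of the transaction status screen's view states.
// In production the mocked API response is injected via DaggerMock and
// Espresso asserts on the rendered UI; here we only map response -> state.
public class TransactionStatusStateTest {

    enum ScreenState { IN_PROGRESS, DELIVERED, FAILED, UNKNOWN }

    static ScreenState stateFor(String apiStatus) {
        switch (apiStatus) {
            case "PROCESSING": return ScreenState.IN_PROGRESS;
            case "COMPLETED":  return ScreenState.DELIVERED;
            case "REJECTED":   return ScreenState.FAILED;
            default:           return ScreenState.UNKNOWN;
        }
    }

    public static void main(String[] args) {
        // One hardcoded mock per screen state instead of dozens of manual walkthroughs.
        Map<String, ScreenState> mocks = Map.of(
                "PROCESSING", ScreenState.IN_PROGRESS,
                "COMPLETED", ScreenState.DELIVERED,
                "REJECTED", ScreenState.FAILED);

        mocks.forEach((status, expected) -> {
            if (stateFor(status) != expected) {
                throw new AssertionError("Wrong state for " + status);
            }
        });
        System.out.println("all states verified");
    }
}
```

The point of the pattern is that every hard-to-reproduce state becomes just another entry in the mock table, rather than a manual setup ritual.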

End-to-end testing

End-to-end testing is the final stage of our QA process. When we finish working on a new feature or app release, we perform end-to-end testing to simulate the experience of a real user.

We use similar tools for integration and end-to-end testing (DaggerMock, Android Instrumentation Tests, Espresso), but this time there is no hermetic environment for every app component. Instead, we assume that each place in the app has to be reached the same way a user would reach it — by clicking through the interface.

The price for end-to-end testing is time — the most complex scenarios can take up to a couple of minutes to test. But in return we cover real use cases (features, navigation flows, third party libraries and more), connect to the real API and work on real data.
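The "reach it the way a user would" rule can be modelled as a navigation graph in which a test may only move between screens via declared taps, never jump straight to a deep screen. The screens and flows below are invented for illustration; the real suite drives the actual UI with Espresso:

```java
import java.util.List;
import java.util.Map;

// Toy model of end-to-end navigation: each step must be a tap that exists
// on the current screen. Screen names and flows are invented for illustration.
public class EndToEndFlowSketch {

    // screen -> (tap label -> next screen)
    static final Map<String, Map<String, String>> NAV = Map.of(
            "Login", Map.of("logIn", "Home"),
            "Home", Map.of("newTransfer", "Recipient"),
            "Recipient", Map.of("confirmRecipient", "Amount"),
            "Amount", Map.of("sendMoney", "TransactionStatus"));

    // Walk a flow tap by tap, failing if any step is impossible for a user.
    static String run(String start, List<String> taps) {
        String screen = start;
        for (String tap : taps) {
            Map<String, String> edges = NAV.getOrDefault(screen, Map.of());
            if (!edges.containsKey(tap)) {
                throw new AssertionError("No '" + tap + "' on screen " + screen);
            }
            screen = edges.get(tap);
        }
        return screen;
    }

    public static void main(String[] args) {
        String end = run("Login",
                List.of("logIn", "newTransfer", "confirmRecipient", "sendMoney"));
        if (!end.equals("TransactionStatus")) throw new AssertionError();
        System.out.println("reached " + end);
    }
}
```

Walking every flow this way is exactly why end-to-end runs are slow: the test pays for each intermediate screen, just as a user does.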

The process

Over the years we’ve learned to manage a relatively small team and maintain very high standards despite a complex product. Here are some of the basic rules and processes that we’ve found helpful as we’ve grown into a global business:

Unit tests

Written by software engineers as an integral part of the code

Code coverage between 50% and 60%. It’s better to deliver code to QA and build integration/end-to-end tests than to chase 80 or 90% coverage with time-consuming unit tests that must be maintained in future

While running all unit tests takes about 1 minute (more than 2,000 tests), the tests for a single class must run in seconds

100% of tests must pass before the code is released to the QA process

Integration tests

Written by QA engineers, supported by software engineers. While QA decides on the testing scenarios and owns the test codebase, the developer must expose modules or single dependencies to make mocking easy and effective

We prefer to measure how many features are covered by tests rather than how many lines of code

Use a hermetic environment (usually up to 3 screens tested at once) with mocked data (usually hardcoded JSON instead of API calls, or a hardcoded database model). While mocking gives us full control over the current state of a screen, it also removes all latencies, such as API call delays
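The hermetic setup boils down to swapping the real API client for a stub that returns hardcoded JSON instantly, so tests are deterministic and latency-free. The interface and JSON shape below are invented for illustration:

```java
// Sketch of a hermetic test environment. In production the implementation of
// this interface would make an HTTP call; the stub answers at once from a
// hardcoded string. Interface name and JSON fields are invented.
public class HermeticEnvironmentSketch {

    interface TransactionApi {
        String fetchTransactionJson(String id);
    }

    static class StubTransactionApi implements TransactionApi {
        @Override
        public String fetchTransactionJson(String id) {
            // Hardcoded JSON instead of a network round trip.
            return "{\"id\":\"" + id + "\",\"status\":\"COMPLETED\",\"amountGBP\":100}";
        }
    }

    public static void main(String[] args) {
        TransactionApi api = new StubTransactionApi();
        String json = api.fetchTransactionJson("tx-42");

        if (!json.contains("\"status\":\"COMPLETED\"")) throw new AssertionError();
        System.out.println("stubbed response: " + json);
    }
}
```

Because the stub is injected as a dependency (via DaggerMock in our case), the screen under test cannot tell the difference, but the test controls every byte it sees.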

End-to-end tests

All tests should take no more than three hours. This limit gives us enough time to test and release in a single day. We also have enough time to do final fixes, rerun failing tests or do manual checks for them

Cover all product features. Our time limit means we have to prioritise what should be tested first and most thoroughly:

- Priority 1: Auth, because you can’t do much without logging in

- Priority 2: Recipient, because you need someone to send money to

- Priority 3: Transaction, because this is our primary business metric

95% of tests must pass before the code is released to production. With such a large product (happy and sad paths, countless customer configurations) and hiccups like outages, illnesses etc., it is not possible to guarantee 100%. That’s why the remaining 5% of failing tests are retested automatically and/or checked manually by QA engineers
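The automatic-retest rule can be sketched as a bounded retry loop: each failing test gets a fixed number of reruns before it is handed to QA for a manual check. The flaky-test simulation below is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Sketch of "retest the failing tests automatically": a test that passes on
// any of its allowed runs counts as green; one that never passes goes to QA.
public class RetrySketch {

    static boolean passesWithRetries(BooleanSupplier test, int maxRuns) {
        for (int run = 0; run < maxRuns; run++) {
            if (test.getAsBoolean()) return true; // pass on any run counts
        }
        return false; // still failing -> manual QA check
    }

    public static void main(String[] args) {
        List<BooleanSupplier> suite = new ArrayList<>();
        for (int i = 0; i < 19; i++) suite.add(() -> true); // stable tests
        int[] attempts = {0};
        suite.add(() -> ++attempts[0] >= 2); // flaky: fails once, then passes

        long passed = suite.stream()
                .filter(t -> passesWithRetries(t, 3))
                .count();
        double passRate = 100.0 * passed / suite.size();

        if (passRate < 95.0) throw new AssertionError("release gate not met");
        System.out.println("pass rate: " + passRate + "%");
    }
}
```

In this simulated suite the flaky test recovers on its second run, so the gate is met; a test that stays red through all its retries is exactly the kind of case our QA engineers check by hand.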

These are just some of the rules and processes we use for mobile app testing at Azimo. In a future post we’ll talk about Continuous Integration and Continuous Delivery environments, proper code and project structure, and more. The Azimo Labs team also contributes to open source by sharing our internal projects like AutomationTestSupervisor (a framework for running and analysing Android tests) and our fastlane plugin for AVD management (see how other companies use it).

It’s worth remembering, however, that we didn’t start out like this. Our early days were as frantic and chaotic as any startup and our first tests were created after we built the MVP of the Azimo app. We learned the hard way how to do things properly, which is why we try to apply rigour and discipline to everything we do today. We’re sharing our knowledge here to remind everyone that it is never too late to start testing code. Today we’re a happier, smarter team that is focused on delivering value to our customers, not just fighting fires. 🔥🔥🔥🚒