Parallel testing: get feedback earlier, release faster

How to test your iOS apps with Xcode 10 effectively

As the only person responsible for testing the Azimo iOS app, I encounter problems of all sizes. When I started at Azimo nearly four years ago, testing was 100% manual. A full test cycle could take up to two days 😱. Some functionality wasn’t tested at all, let alone on different devices and operating systems.

These days, only 15% of our testing is manual. Automation has made us faster and more efficient: in 2018, 99.9% of our users were crash-free. Due to the complexity of our application, however, a full test cycle took around eight hours, more than half of which was taken up by automated tests. Given that we release at least once per week, we needed to find a way to reduce our test cycle even further.

Fortunately, on June 5th, a revolution arrived. At the WWDC 2018 conference, Apple announced support for parallel testing in Xcode 10. This reduced our cycle of 250 automated tests to around 55 minutes and made my life a lot less stressful in the process 😎.

Integration takes ~56 minutes on 5 simulators with 247 test cases — 98.38% passed

In this article, I will guide you through the benefits of parallel testing as we have experienced them at Azimo. I will outline what we test, how we configured our machine and the problems that we encountered. I will also provide some tips and tricks that will help you to get the most out of parallel testing.

Testing in parallel on 5 simulators

Parallel testing in UI Tests

Parallel testing was introduced for iOS devices in Xcode 9. Since then, we have been able to test different classes in parallel on different devices. The Xcode 9 version, however, required us to use the command line with appropriate parameters.

Xcode 10

For an automation tester, the revolutionary feature of Xcode 10 is the ability to run different test classes simultaneously on multiple simulators directly from Xcode, with far less effort from a human tester. It’s as simple as enabling the feature:

How to enable parallel testing option

In this mode, Xcode creates several clones of the same simulator (e.g. if we use the iPhone X simulator, then Xcode creates simulated iPhone X clones). This only works for simulated devices. We will tackle parallel testing on physical devices later in the article.

Benefits:

Simple, intuitive configuration

Faster test execution

Faster feedback

Allows for more releases

Exercises the application server from multiple devices at the same time (requests, timeouts etc.)

No need to create extra test targets and distribute test classes between them

Great integration with CI

Drawbacks:

With an unstable back-end or application server, timeouts may occur (several simultaneous requests can lead to longer response times)

Plenty of processing power needed to keep simulators stable

Flakiness — if any of the above problems occur

Clones share the same device and system configuration, so we cannot test several different configurations at the same time (for that, we must use the command line)

Assumptions:

The basic condition that must be met to use parallel testing is the independence of test cases. For example:

We cannot rely on test B starting only after test A is complete

The execution speed or result of test B cannot affect the result of test A, and vice versa
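As a hypothetical sketch of such an independent test case (class and method names are invented, and XCTestCase is replaced here by a minimal stand-in so the snippet is self-contained; a real UI test target would import XCTest instead):

```swift
// Minimal stand-in for XCTest's XCTestCase so this sketch compiles
// on its own; in a real UI test target it comes from `import XCTest`.
class XCTestCase {
    func setUp() {}
}

// An independent test: it prepares all of its own state in setUp(),
// so it can run on any simulator clone, in any order, without
// relying on another test having run first.
final class CreateTransferTests: XCTestCase {
    var loggedIn = false

    override func setUp() {
        // Log in fresh for every test instead of assuming a previous
        // test (e.g. a login test) has already done it.
        loggedIn = true
    }

    func testCreateTransfer() {
        // A real test would use XCTAssert here; the point is that
        // nothing depends on test ordering.
        assert(loggedIn, "must not depend on another test's login")
    }
}
```

In a real target the assertion would be an XCTAssert call; what matters for parallel testing is that the test owns its whole setup.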

Useful parameters:

-maximum-concurrent-test-simulator-destinations NUMBER: the maximum number of simulator destinations to test on concurrently

-parallel-testing-enabled YES|NO: overrides the per-target setting in the scheme

-parallel-testing-worker-count NUMBER: the exact number of test runners that will be spawned during parallel testing

-maximum-parallel-testing-workers NUMBER: the maximum number of test runners that will be spawned during parallel testing

Effective division of tests

To maximise the advantages of parallel testing, we must divide the test cases into classes. Let’s use our international money transfer application, Azimo, as an example, and assume that our full test cycle contains five test cases.

The first option, which we definitely don’t recommend, is to create one large class containing all five test cases:
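The class itself is not reproduced in this text version, but a hypothetical sketch of such a monolithic class (names invented, XCTestCase stubbed so the snippet stands alone) would look something like this:

```swift
// Stand-in for XCTest's XCTestCase so the sketch compiles standalone.
class XCTestCase {}

// Anti-pattern: all five flows in a single class. Xcode hands whole
// classes to simulators, so this entire suite runs on one device.
final class AzimoTests: XCTestCase {
    func testLogin() {}
    func testRegister() {}
    func testCreateRecipient() {}
    func testCreateTransfer() {}
    func testSettings() {}
}
```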

It is far better to create several smaller test classes:
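Again as a hypothetical sketch (names invented, XCTestCase stubbed so the snippet stands alone), the same five cases split into one small class per flow:

```swift
// Stand-in for XCTest's XCTestCase so the sketch compiles standalone.
class XCTestCase {}

// One small class per flow: with five classes, Xcode can hand each
// one to a different simulator clone and run them all concurrently.
final class LoginTests: XCTestCase { func testLogin() {} }
final class RegisterTests: XCTestCase { func testRegister() {} }
final class CreateRecipientTests: XCTestCase { func testCreateRecipient() {} }
final class CreateTransferTests: XCTestCase { func testCreateTransfer() {} }
final class SettingsTests: XCTestCase { func testSettings() {} }
```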

In the first example, parallel testing gains us nothing, because Xcode assigns whole test classes to simulators: one free test class is allocated to one free device. Even if we have five devices available, only one will be used. In the second case, each class can run on a separate device.

If you have more classes than available devices, say six classes and only five devices, the first device that completes testing will receive the sixth and final class.
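This first-free-device behaviour can be modelled with a toy scheduler in plain Swift (an illustration of the idea only, not Xcode’s actual implementation; the class names and durations are invented):

```swift
// Toy model of how classes are handed out: each free device takes
// the next pending class, so with six classes and five devices the
// first device to finish picks up class number six.
struct Device {
    let name: String
    var busyUntil = 0.0      // simulated time at which the device frees up
    var assigned: [String] = []
}

func schedule(classes: [(name: String, duration: Double)],
              deviceCount: Int) -> [Device] {
    var devices = (1...deviceCount).map { Device(name: "Simulator \($0)") }
    for testClass in classes {
        // Pick the device that frees up earliest (a free device has
        // the smallest busyUntil value).
        let i = devices.indices.min {
            devices[$0].busyUntil < devices[$1].busyUntil
        }!
        devices[i].assigned.append(testClass.name)
        devices[i].busyUntil += testClass.duration
    }
    return devices
}

// Six classes, five devices: durations are made up for illustration.
let classes: [(name: String, duration: Double)] = [
    ("LoginTests", 5), ("RegisterTests", 8), ("RecipientTests", 6),
    ("TransferTests", 7), ("SettingsTests", 9), ("RatesTests", 4),
]
let result = schedule(classes: classes, deviceCount: 5)
```

With these durations, LoginTests finishes first, so the same simulator also receives the sixth class, RatesTests.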

Run in parallel via command line

If we want to test in parallel on physical devices, we are forced to use the command line. There are several ways to do this.

For parallel testing using the command line, we need the ID of our devices. The first step is to display all connected devices and simulators together with their ID, using the following command:

xcrun instruments -s devices

Using the ID of a specific physical device, we can run the command responsible for running parallel testing.

In the project folder, we run the following command:

xcodebuild \
  -scheme Azimo \
  -destination id={deviceID1} \
  -destination id={deviceID2} \
  test

This command will run the same test classes in parallel on two different devices. This approach lets us test on different devices and systems simultaneously, but doesn’t save us any time, because every device runs the full suite.

To speed things up, we can manually separate classes into devices. First we need to build our testing project with the following command:

xcodebuild -project Azimo.xcodeproj -scheme Azimo -destination id={deviceID} build-for-testing

Then we divide our test classes between the available devices. For example, with four test classes and two available devices, it is best to assign two test classes to each device as follows:

xcodebuild \
  -scheme Azimo \
  -destination id={deviceID1} \
  -only-testing:Azimo/login_tests \
  -only-testing:Azimo/register_tests \
  test-without-building & \
xcodebuild \
  -scheme Azimo \
  -destination id={deviceID2} \
  -only-testing:Azimo/createRecipient_tests \
  -only-testing:Azimo/createTransfer_tests \
  test-without-building &

In the case mentioned above, separating the classes into devices saves a lot of time. The optimal solution would be to use four devices, one for each class, but this isn’t always possible.

Another way to divide test classes is to create a new test target, e.g. Azimo1, and to move the appropriate test classes into it so that they are no longer in the Azimo target.

Adding classes to the test target

Assuming that the Azimo target contains different test classes than the Azimo1 target and vice versa, we can run both targets in parallel on different devices with the following command:

xcodebuild \
  -scheme Azimo \
  -destination id={deviceID1} \
  test-without-building & \
xcodebuild \
  -scheme Azimo1 \
  -destination id={deviceID2} \
  test-without-building &

The test execution time will be roughly the same as with the -only-testing approach described above.

As you can see, the above methods work but require some human effort (creating new targets, separating tests between targets, maintaining targets when adding new cases / test classes). Xcode 10 brought substantial benefits that help with this.

Continuous Integration

Since Xcode 9, Apple has provided a built-in Xcode Server. Previously, we had to use the separate macOS Server application. Creating and configuring an Xcode server is simple: go to Xcode -> Preferences -> Server & Bots, and create a new Xcode server with the correct credentials.

After enabling and configuring an Xcode server, next we must create a bot to run integration with the given configuration. In our case, it’s a bot called UITestsDaily. The bot runs a full cycle of application tests at our local CI every day at midnight.

Command to force running tests on 5 simulators in parallel

By configuring the bot with the parallel-testing-worker-count parameter set to 5, I force tests to run on five simulators at the same time.

At Azimo, we use an iMac (specification below) with Xcode 10 installed. This serves as our local continuous integration.

Why use the solution above?

Each bot has an integration history (screenshots, logs etc.)

On our personal computers we can implement and debug other test scenarios while the integration runs on CI

More resources (memory, processor), which allow us to run more simulators in parallel

Continuous Integration specification

The above specification allows you to run parallel testing on five stable simulators; my MacBook Pro 15” (16GB RAM, Intel i7 2.2GHz) manages up to three. This saves us a lot of time.

Disadvantages:

Requires a dedicated, high-spec computer

Large integrations use a lot of memory

No easy way to clean old and unnecessary data

No flexibility in bot configuration (for example, we cannot set an integration to start every 30 minutes)

Test reporting

After every integration that runs on our CI, we want to know what percentage of tests passed. One option is to connect to the CI, open the specific integration and check manually. A better option is the Slack + Fastlane integration, which sends a notification to Slack immediately after the integration completes:

Notification on Slack after integration: 98.38% passed

As you can see, this notification provides a lot of useful information, including:

Branch on which integration was carried out

Name of the bot on which the integration was made

Number of tests in the integration

Number of failed tests

Percentage of tests completed successfully

Triggers

Each of the created bots has a Triggers tab, where we can define what should run before and after each integration. Well-written scripts can speed up debugging when searching for the causes of integration errors (e.g. attempting to test on a device that does not exist, or running a script from the wrong directory). Using the UITestsDaily integration as an example, here is what we use to improve our work.

Pre-integration scripts — commands invoked before integration

Log information

Print environment variables

Print a list of all available simulators

Print all files in the current folder

Print the path to the current folder

Clean

Clean the simulators before running the tests

Open the project folder

Post-integration scripts — commands to be invoked after integration

Log information

Print environment variables

Print the path to the current folder

Send status

Open the home directory so that the integration status can be sent to Slack

Print the path to the current folder

LANG=en_US.UTF-8: sets the locale for the terminal session

export PATH=/usr/local/bin:$PATH: adds /usr/local/bin to the PATH variable (in our case, to make Fastlane easier to access)

echo “…..:: all env variables ::…..”

echo “…..:: send status ::…..”

echo “…..:: simulators list ::…..”

echo “…..:: list current folder ::…..”

The above printouts make it easier to find the values we are interested in when searching the logs.

In future: testing in the cloud

I hope that this article provided some useful information about the benefits of parallel testing. While parallel testing has saved us huge amounts of time and effort, it still has hardware limitations (you always need more devices/processing power). Our next step is to start testing in the cloud, where resources are almost limitless. I will write about this in my next article.

If you have any questions, feel free to post in the comment section 🤓.