In a world full of competition, companies want to release their software faster than ever. The reason is fear: fear of being second, of losing market share. In the rush to be ever faster, decisions are made, and unit tests are often the first thing to be postponed to “a later time”. Developers produce huge amounts of untested code and feel quite comfortable with it. Day X arrives and it’s time to release the product. Since the software isn’t too large yet, they can get by with manual testing during the release. Nothing wrong here! Seriously, manual testing is great!

The problem starts a few months later. At some point, manual testing takes longer and longer. Your app has grown more complex and has many more features. Furthermore, you have to check for regressions. This is where the earlier cuts in quality start to show. A release cycle suddenly no longer takes a few hours; it takes a few weeks. Gone are the releases every two weeks, and one question pops up:

Testing stops us from releasing often, what can we do?

It’s a good question with multiple options. Ignore the idea of decreasing your testing effort: that is a sure way to hell. Another option would be to start at the developer level. Let’s add unit tests! Oh crap… since we neglected them in the beginning, our code is not that easy to test. It would take months to build a basis for unit testing. So what else can we do?

Looking at the entire process, we could adopt some kind of testing methodology. One step would be to introduce BDD. But that wouldn’t help us in a fast and surefire way.

So, many companies start with automated UI testing: let the computer perform the same steps we would do to test the app manually. We can run these tests every night (or on every commit?) and it’s a lot faster than having the team sit down and do the manual labor.

Before continuing let me say this:

Automated UI-Testing is NOT your silver bullet to fix your quality!

But it is a cog in the whole machine. So just because it’s not a silver bullet, don’t ignore it entirely.

Frameworks

There are a lot of different approaches on iOS. Some are more native, some are less. Let’s look at them individually.

XCUITest

A few years ago, Apple discontinued its JavaScript-based automation framework and replaced it with XCUITest. This is an Apple-supported native library, meaning you can write your UI tests in Objective-C or Swift.

One thing I had to wrap my head around: you don’t check whether an element (label, button, etc.) contains a value. Instead, you check that an element with that value exists on screen. So testing whether a text is displayed looks like this:

XCTAssert(app.staticTexts["Welcome"].exists)
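For context, such an assertion lives in a UI test target, where the app is usually launched fresh in `setUp`. A minimal sketch of a complete test case (the class and test names are made up for illustration):

```swift
import XCTest

class WelcomeScreenTests: XCTestCase {

    let app = XCUIApplication()

    override func setUp() {
        super.setUp()
        // Stop immediately on the first failure instead of continuing.
        continueAfterFailure = false
        // Launch a fresh instance of the app before each test.
        app.launch()
    }

    func testWelcomeTextIsShown() {
        // Assert that an element with the text "Welcome" exists on screen.
        XCTAssert(app.staticTexts["Welcome"].exists)
    }
}
```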

XCUITest runs in its own process, which has advantages and disadvantages. It can see everything on screen, but it can’t inspect the app’s internal state. That’s quite nice, since it prevents developers from making mistakes based on knowledge a normal user doesn’t have. On the other hand, running in a separate process also means it has to synchronize with the app’s state, which takes time and slows down testing. Furthermore, during long-running operations, XCUITest can fail because it can’t find the requested element yet.

To wait a small amount of time before failing, we can wait for elements to appear:

let goLabel = app.staticTexts["Go!"]

let exists = NSPredicate(format: "exists == true")

expectation(for: exists, evaluatedWith: goLabel, handler: nil)

waitForExpectations(timeout: 5, handler: nil)

This waits for an element with the text “Go!” to appear. If it exists within 5 seconds, the test continues. Otherwise, it fails.
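On newer Xcode versions (9 and up), the same wait can be written more compactly with `waitForExistence(timeout:)`, which blocks until the element appears or the timeout elapses:

```swift
let goLabel = app.staticTexts["Go!"]

// Waits up to 5 seconds; returns true as soon as the element exists.
XCTAssert(goLabel.waitForExistence(timeout: 5))
```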

Another common use case is interacting with elements. Tapping a button is quite simple:

app.buttons["Add"].tap()

Writing text into a textfield is similar:

let textField = app.textFields["Username"]

textField.tap()

textField.typeText("<text to write>")

One last thing that confused me was the default handling of system dialogs. When a UIAlert pops up, XCUITest automatically selects the default button after a short period of time. This way you don’t have to automate things such as granting access to the user’s media library.
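If you do want to handle such a dialog yourself, for example to tap a specific button instead of the default one, XCUITest offers interruption monitors. A sketch (the description string and the “Allow” button title are assumptions depending on the concrete dialog):

```swift
// Register a handler that runs whenever a system alert interrupts the test.
addUIInterruptionMonitor(withDescription: "Permission dialog") { alert in
    if alert.buttons["Allow"].exists {
        alert.buttons["Allow"].tap()
        return true // dialog handled
    }
    return false // fall back to the default behavior
}

// Monitors only fire once the test interacts with the app afterwards.
app.tap()
```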

Since its introduction, constant improvements have been made with every release of Xcode. The newest version adds options such as:

starting multiple apps at the same time (check out how they interact)

warm starting (sending the app into background and get it back)

taking screenshots

better async testing
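The screenshot support, for example, works by attaching images to the test report via XCTAttachment (Xcode 9+). A minimal sketch of a helper inside an XCTestCase subclass:

```swift
func takeScreenshot(named name: String) {
    // Capture the whole screen and attach the image to the test results.
    let screenshot = XCUIScreen.main.screenshot()
    let attachment = XCTAttachment(screenshot: screenshot)
    attachment.name = name
    attachment.lifetime = .keepAlways // keep the image even if the test passes
    add(attachment)
}
```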

So even if you decide XCUITest is not the right tool for you, you might want to keep track of it.

EarlGrey

In early 2016, Google released EarlGrey. The difference between XCUITest and this UI automation framework is that EarlGrey and the app share the same process. In this so-called grey-box testing, the test can access the shared memory and thus change the runtime behavior of the app.

On their GitHub repository, Google provides detailed setup instructions.

Let's have a look at how it compares to XCUITest!

Testing for text:

EarlGrey.select(elementWithMatcher: grey_text("Welcome")).assert(grey_sufficientlyVisible())

Interact with a button:

EarlGrey.select(elementWithMatcher: grey_accessibilityID("Add")).perform(grey_tap())

Write text into textfield:

EarlGrey.select(elementWithMatcher: grey_accessibilityID("Username")).perform(grey_typeText("<Text>"))

Regarding synchronization: Google claims that because EarlGrey shares the app's process, it takes care of synchronization itself. You can control it manually, but you probably won't need to.
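If you ever do need that manual control, for instance for a screen with an animation that never settles, synchronization can be toggled through GREYConfiguration. A hedged sketch based on the EarlGrey 1.x API (the "Spinner" identifier is made up):

```swift
// Temporarily disable EarlGrey's automatic synchronization…
GREYConfiguration.sharedInstance().setValue(false,
    forConfigKey: kGREYConfigKeySynchronizationEnabled)

// …interact with the perpetually busy screen…
EarlGrey.select(elementWithMatcher: grey_accessibilityID("Spinner"))
    .perform(grey_tap())

// …and re-enable it afterwards so other tests keep their safety net.
GREYConfiguration.sharedInstance().setValue(true,
    forConfigKey: kGREYConfigKeySynchronizationEnabled)
```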

Appium

Appium is different from XCUITest and EarlGrey. It's cross-platform, and you don't have to use a native language. The idea is that you or your test engineer write tests in your preferred language and run them on all supported platforms. This could cut the time spent writing and maintaining tests in half since, in an ideal world, the same tests work on both Android and iOS. If you've ever worked on an app supporting both platforms, you already know the result: it's not realistic. Instead, you end up writing the same tests separately for both platforms.

What puts me off most is the speed. Take a simple login test:

Start the app

Enter credentials

Press Login

Check if logged in

With Appium, this took 5 (in words: FIVE) minutes to complete. The same test runs with XCUITest or EarlGrey in less than 10 seconds (which is still quite a long time).

Other Options

There are more options than those listed above. If you want to stay native, there are KIF and Frank. Otherwise, if you like Cucumber, there is an iOS version supporting it called Calabash.

Running on Device Farms

There are a lot of different device farms, but in my opinion, you should stick with the big players. Amazon is one of them; sadly, it doesn't support EarlGrey yet. Another option is Xamarin Test Cloud.

This landscape is changing constantly, but my experience with AWS has been great compared to smaller test cloud providers.

Conclusion

Having used most of them, I’d stick with XCUITest or EarlGrey, but this is highly subject to preference. XCUITest runs on AWS but, due to its extra process and the resulting synchronization effort, it's slower. EarlGrey, on the other hand, is faster but doesn't run on AWS.

Picking up the basics is quite fast for both, and you shouldn't have any problems switching between them. I'm still not happy about the execution speed and stability. Imagine having 100 tests averaging 15 seconds each: that adds up to 25 minutes. Sounds reasonably fast compared to one day of manual testing for 5 people. But there are faster ways, such as UI-less acceptance testing, which we will elaborate on in a later post.

There are multiple reasons why UI tests fail. Sometimes the device/simulator was in the wrong initial state. Sometimes Xcode just couldn't connect to the app. Often something is actually wrong in your app. Whatever the reason, every time something fails you need to check why. This can be tedious and keep you from other work.

It's up to you to decide whether a time decrease of 99% during release testing is worth the trouble during the day, having to check every time a build/test run fails.

Further reading

Previous: Behavior Driven Development

Next: Mock Network Requests in UI-Tests