March 23, 2018

Two years ago, I joined Dimagi as a Quality Assurance Analyst to help the development team test new features and create and maintain our test plans. I didn’t have a background in Computer Science and focused mostly on manual and functional testing. Early on, I spent most of my time on the Android component of CommCare, our customizable, open source mobile data collection tool. My testing ranged from basic checks, like ensuring forms could be submitted or that certain question types worked, to more complex scenarios involving multiple workflows across multiple devices, or issues arising from updating software versions. Our test plan for the mobile product had 331 test cases covering the functions of the core application. Running through these tests was a completely manual process: two people would spend 10 days working on nothing else. That is a whole lot of time to spend on one aspect of your product each month, even one as complex as CommCare.

That said, at the time, I didn’t see 10 days as being too long. As we continued to enhance the mobile experience and add functionality, more test cases were added to the plan. As the test plan grew, each release took longer to test, hitting 12 days of testing time after the first few releases I was involved in. Predictably, QA became a bottleneck. As the process slowed down, our mobile team could not reliably release new features each month. We were faced with a problem that would only continue to grow as we added to and improved upon the product. So we decided to do something about it by building out an automated test infrastructure.

We began evaluating a few tools and working on a test plan that we could follow for our automation scripts. While the mobile team began working on a framework, I began work on the test plan. Once both of these were in place, the teams began scripting. The end result spoke for itself. We automated over 100 test cases from the test plan. With this one change, manual testing that once took us most of the month now took less than a week. The automated tests we set up take about 1 hour to run from start to finish. It takes us 3 days to complete the rest of the test plan and ensure what we couldn’t automate works. This has easily saved us hundreds of hours in testing since we implemented this system. In addition, if test cases for a new feature can be automated, anyone on the team can now write a script thanks to the simple, yet robust framework. Taking the time to set up our framework and automate the test plan has completely changed how we handle mobile testing for the better.

Automated Testing

When I first started as a tester, I knew very little about automation concepts and tools. I had done a bit of research, but most of what I knew came from hearing other colleagues in the QA field tout the benefits of automating their tests. After jumping in, the benefits became obvious to me as well. I no longer had to focus on the most basic aspects of our application when testing. I knew those were going to be covered by the automated tests that we were now running on a nightly basis using our continuous integration pipeline. I could finally focus my manual efforts on the more complex functionality and testing as a real user would.

One of the key reasons this was a successful transition to automation was the tools the team chose: Calabash-Android and Cucumber. With these frameworks, we were able to create a system that less technical individuals could use, as the scripts are written in a human-readable language. What this means is that each sentence in your script corresponds to Ruby code (e.g., “Then I install this application” will call Ruby code that performs that action on the mobile device). As someone without a coding background, this was invaluable. I could still write human-readable scripts and use the opportunity to write my own Ruby code, or get help from the mobile team if I was stumped on how to write a step. It was a game changer in becoming familiar with automation concepts and coding itself.
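As a rough illustration of the mechanics, each step sentence is matched against a regular expression, and the captured groups become arguments to the step's Ruby code. A minimal sketch in plain Ruby (Cucumber wires this up for you; the matching shown here is simplified):

```ruby
# The regex a step definition declares (this one is from a later example).
pattern = /^I select module "([^"]*)"$/

# The sentence as written in a scenario (minus the Then/When/Given keyword).
sentence = 'I select module "Registration"'

# Cucumber finds the matching pattern and hands the capture to the step body.
match = sentence.match(pattern)
puts match[1]  # => Registration
```

The quoted parameter is what makes one step definition reusable across many scenarios.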

What You Should Know: Getting Started with Calabash-Android

Looking back to when I first started learning to use Calabash-Android, there was a lot that I wish I knew before I actually started scripting and writing step definitions. Calabash-Android is a powerful tool and it can take some time to learn how to fully utilize the system. The next few sections are some of the things I wish I had known when I first started scripting with Calabash-Android.

Going forward, I am going to assume you have had some exposure to Calabash-Android. If you have not, Calabash-Android’s GitHub page provides a great overview and instructions for getting everything installed and set up. You will need to install a few different tools, like Ruby and the Android Software Development Kit (SDK), to get started. I highly recommend reviewing their documentation when you have a chance, as it provides a detailed overview of how to get started.

Customizing Steps to Improve Reliability

To get started, Calabash provides a few different baked-in steps that you can use for scripting. While these are useful for getting started, I have found that they can be brittle and that certain ones fail in some scenarios. For this reason, I would recommend writing your own steps and Ruby code. In some of the following examples, you will likely notice the string "([^\"]*)". This string indicates a parameter that you can define when you write your scenarios. For example, here is an installation step that we use:

Then (/^I install the "([^\"]*)" apk$/) do |apk|
  system("adb install -r features/resource_files/apks/%s.apk" % apk)
end

To call the step in the scenario, we would enter the following:

Then I install the "2.40" apk

When the step is triggered in the scenario, the Ruby code will look for the 2.40.apk file stored in the resource folder and trigger the install.
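For clarity, the `%s` in that step is plain Ruby string formatting: the parameter captured from the scenario is substituted into the apk path before the shell command runs. A quick sketch of just that substitution:

```ruby
# The parameter captured from: Then I install the "2.40" apk
apk = "2.40"

# Same formatting the step definition uses to build the adb command.
command = "adb install -r features/resource_files/apks/%s.apk" % apk
puts command  # => adb install -r features/resource_files/apks/2.40.apk
```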

Creating Shortcuts to Write Maintainable Steps and Scripts

As I started scripting, I noticed there were times when I would use the same step definitions multiple times in a row. More often than not, this occurred when navigating through the application. As an example, a lot of our tests were set up so that you would return to the home screen before initiating the next test. Using the baked-in Calabash steps, we would need to write "Then I go back" for each screen we would touch on the way back to the home screen. It gets pretty unwieldy writing that several times in a scenario, and it simply increases the number of places the script can fail. So we wrote one step definition that returns us to the home screen to use whenever we need to go back to it:

Then (/^I go back to the home screen$/) do
  while current_activity() != "StandardHomeActivity"
    press_back_button
    if element_exists("* {text CONTAINS[c] 'EXIT WITHOUT SAVING'}")
      tap_when_element_exists("* {text CONTAINS[c] 'EXIT WITHOUT SAVING'}")
    end
    sleep 1
  end
end

Creating shortcuts like this can keep your ruby code and scripts more manageable and improve the reliability of your automated tests.
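To make the difference concrete, here is roughly what a scenario looks like before and after the shortcut (the scenario wording here is illustrative, not copied from our test plan):

```gherkin
# Before: one "go back" per screen between the form and the home screen
Then I go back
Then I go back
Then I go back

# After: one step, regardless of how deep in the app we are
Then I go back to the home screen
```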

Reducing Code by Combining Tasks

As I began running test scripts more frequently, I noticed they would time out pretty often. It turns out that Calabash-Android sometimes initiates steps more quickly than the application can keep up with. This results in brittle tests and timeout errors. To counter this, I started to liberally use the command "Then I wait", which is not a great habit to get into. It adds unnecessary length to your scripts and creates more failure points. Thankfully, one of our mobile developers pointed out that we should instead use a combination of Ruby commands that build the wait functionality into each action.

The Ruby commands wait_for_element_exists, wait_for_element_does_not_exist and tap_when_element_exists became some of the most used commands in my step definitions. These commands all wait for a specific element to be on the screen before triggering:

Then (/^I select module "([^\"]*)"$/) do |text|
  wait_for_element_exists("* id:'screen_suite_menu_list'")
  tap_when_element_exists("* {text CONTAINS[c] '#{text}'}")
end
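Under the hood, these wait_for-style helpers are essentially polling loops: they re-run a check until it succeeds or a timeout expires. A minimal sketch in plain Ruby (the helper name, timings, and simulated element here are ours for illustration, not Calabash's actual implementation):

```ruby
# Poll a condition until it returns true or the timeout expires.
def wait_until(timeout: 5, interval: 0.1)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "Timed out after #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

# Simulate an element that "appears" on the third poll.
polls = 0
appeared = wait_until(timeout: 2, interval: 0.01) do
  polls += 1
  polls >= 3
end
puts appeared  # => true
```

A loop like this is why the wait-based commands are so much more reliable than a fixed "Then I wait": they proceed the moment the element appears, and only fail once the timeout is genuinely exhausted.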

Querying for the Correct Element

In order to fully utilize wait_for_element_exists, wait_for_element_does_not_exist and tap_when_element_exists, we needed to be able to identify the specific element we were waiting for. By leveraging query syntax (the "*" selector), we were able to create more reliable step definitions. In the code from the previous section, both the wait_for_element_exists and tap_when_element_exists statements use the * symbol. Let’s examine the first:

wait_for_element_exists("* id:'screen_suite_menu_list'")

The * indicates that the query will compare against everything visible on the screen. The id portion indicates that you are looking for a specific string id. So, when used in conjunction with wait_for_element_exists, your step definition is saying to wait until the element with id 'screen_suite_menu_list' has appeared on the screen. The next command performs a similar action:

tap_when_element_exists("* {text CONTAINS[c] '#{text}'}")

However, this command is now saying to click the element that contains the text that was specified in the step. This combination of commands can be extremely powerful in ensuring your scripts run properly and you are selecting the appropriate element.

Using the Console to Run Commands

One of the challenges I found in writing step definitions was knowing what elements to interact with or what IDs to use. This made it difficult to write step definitions, and I found myself trying to rework existing ones to fit my needs or bugging the mobile developers for assistance. After finding out about the query and console functionalities, this became much easier. By connecting a physical device to your computer and opening the console, it is possible to run Calabash-Android commands against the device. For more detailed information on how to get the console started, I highly recommend reviewing Calabash-Android’s documentation.

By using this functionality, I could identify all the information necessary to write my own step definitions. I could now readily find element IDs or spot when there were multiple copies of the same element on the screen. But the console’s usefulness does not stop at being able to query and find elements. I could also test the Ruby code I was going to use in my step definitions. By going to the relevant screen on my mobile device and running the Ruby code in my console, I could determine whether the code would act as I expected before I ran my script. This vastly improved my scripting speed, as I could now test my code on the fly.

Web Interactions to Validate Server Behavior

As we improved our scripting abilities, one of the more interesting uses we found for Calabash-Android was validating server behavior. With the way our mobile platform works, we often need to ensure that forms were submitted correctly. To do this, we use Python commands in a separate library to interact with our web service APIs and ensure various actions occurred:

Then (/^I check form was uploaded$/) do
  was_upload_success = system("python3 commcare-hq-api/utils.py assert_newer_form")
  if not was_upload_success
    fail("No new form submission since the last check")
  end
end

With this particular step, we are checking that the form was submitted successfully to the server by interacting with our APIs. This simple addition now allows us to perform end-to-end testing and ensure that not only does the mobile platform work, but that our integrations with the web server are also functioning as intended. By using custom libraries to run Ruby and Python scripts together with Calabash-Android, we have enhanced our ability to test some of the more complex areas of our code base. The more we use the tool, the more ways we find to improve our test coverage. If you want to check out some other ways we run Ruby and Python code together with Calabash-Android, you are welcome to check out our step definitions.
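The glue that makes this pattern work is Ruby's system, which returns true only when the child process exits with status 0, so the Python assertion script just needs to exit non-zero on failure. A minimal sketch of that convention (using ruby -e in place of the real Python utility):

```ruby
# system returns true when the command exits 0, false otherwise.
ok  = system(%q{ruby -e "exit 0"})  # stands in for a passing assertion script
bad = system(%q{ruby -e "exit 1"})  # stands in for a failing assertion script
puts ok   # => true
puts bad  # => false

# Mirrors the step definition: fail the scenario on a non-zero exit.
fail("No new form submission since the last check") unless ok
```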

Conclusion

Automation testing with Calabash-Android completely changed the way we handle testing. From a QA standpoint, we have saved hundreds of hours by no longer having to check the most basic components of the application. We can focus on testing like a real user and target the more complex features. But more importantly, we’ve integrated these tests with our continuous integration pipeline. Now, our automated tests trigger nightly on a deploy. We get rapid feedback on any changes, so bugs and regressions are caught before we even initiate QA. Check out the excellent blog post that my colleague, Will, wrote on how we handle this aspect of our automated testing. I highly recommend reading it to see some of the cooler stuff we are doing with the tool!

If you’re new to automation in general, like I was, I hope this post has given you some confidence to get started with any tool. While automation won’t ever replace good ol’ manual testing, it can certainly help supplement your efforts and free up your time to focus on the test cases that matter. Good luck scripting and thank you for reading!