Rethink Your Automation Setup

By Stratos Ioannidis

After spending countless hours working around framework issues, maintaining tests that are more hardcoded than the ten commandments, and investigating test failures that appear only during your absence, you realise something is wrong. UI automation shouldn’t have to be this hard to deal with or maintain.

Whether it’s unit tests, integration tests, or UI tests, there are plenty of posts and stories online that explain how those work. However, my team and I found there aren’t many stories that point out the challenges along the way and how to overcome them. Considering the amount of time we spent, I decided to document our story in an attempt to guide others facing the same problem and help them avoid pitfalls related to refactoring or retooling their automation.

What follows are recommendations and examples of how to approach the process of discovery, management buy-in, and developing better coding practices and frameworks for automation. These tips and suggestions might not work for everyone, depending on your development process, but hopefully they can give you or your team ideas to begin your own process.

Refactoring More & Testing Less

The problems of maintenance, refactoring, and working around an old framework, along with a number of other symptoms, are good indications that your test automation setup doesn’t suit your current product needs. Here are some signs that what you currently have isn’t working:

You spend more than 50% of your time maintaining tests

You’re not sure how to structure new tests

The developers are afraid to go anywhere near the test code

The tests fail constantly with no code changes

Our previous setup was full of messy test suite files.
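A hypothetical sketch, not the author's actual code, of the kind of suite described above, with hardcoded environments, brittle CSS paths, fixed sleeps, and tests that depend on each other's side effects (all names and URLs are made up):

```javascript
// Hypothetical sketch of a messy suite -- NOT the author's actual file.
const messySuite = {
  'user can log in': function (browser) {
    browser
      .url('https://staging-2.example.com/login') // hardcoded environment URL
      .setValue('#app > div:nth-child(3) input', 'qa_user_17') // brittle CSS path
      .setValue('input[type=password]', 'Passw0rd!') // credentials baked into the test
      .click('.btn.btn-primary.submit-btn')
      .pause(5000) // fixed sleep instead of an explicit wait
      .assert.urlContains('/dashboard');
  },
  'user can upload a photo': function (browser) {
    // silently depends on the login test above having run first
    browser
      .click('body > div.header > ul > li:nth-child(2) > a')
      .pause(8000)
      .assert.containsText('.upload-status', 'Done');
  },
};

module.exports = messySuite;
```

Every one of these habits shows up later in the article as something we had to undo.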



Starting Over With A New Automation Process

Realising there is an issue is the easy part; the road to a new setup will likely not be paved with gold. As well as finding the right tools and concepts and implementing them, depending on the company you work in, you might have to persuade your team, your boss, or even the entire company of your intentions.

Step 1. Get Management Support

Test automation is an investment. Before anything else, you need to find out whether the company is willing to spend time on test automation. There are plenty of case studies to back you up and tools to help you calculate the return on investment.

Not all products need test automation; more traditional companies with long release cycles might not require it. Even if you are working in a fast-paced agile team with multiple releases per week, you will be lucky if there is no resistance to change. Exact figures, estimations, and a solid plan are necessary to convince the right people.

Step 2. Choose Your Tools

Do the research. Make a matrix of the pros and cons of each framework and choose one, or a combination, based on your needs. Choosing the wrong tools will, at best, cost you a lot of time during prototyping and, at worst, leave you with an unusable test automation setup.

Things to consider:

Your end goal or goals - e.g. cross-browser automated testing.

Paid vs open source solutions - Since paid solutions usually provide a trial period, you can review both options and pick a tool that matches the company’s needs and budget.

Tool support - It makes a huge difference whether a framework is used by thousands of users or a handful. An active community means regular maintenance and makes it easier to find help with issues.

Tool language - Choosing a tool that uses the same programming language as your team is essential if you plan to ask for development or maintenance help.

Existing knowledge - Ask the team about existing knowledge; you shouldn’t introduce a new tool if there is already one in place.

Step 3. Prototype

Create the most basic setup and check that it works for you. If you completed step two properly, you will have proof that the setup works. A simple scenario will give you an idea of how the framework performs and approximately how long your test suite will take to execute.

A successful prototype should be:

Configurable

Easy to comprehend

Fast to run

Extendable

Fast to develop

Selenium Easy has developed a simple prototype example you could use.

Step 4. Get Feedback

Ensure everyone is on board with the selection of tools by showing the working prototype to team members, other teams, and any interested stakeholders.

Explaining why you selected the specific tools, how you approached the prototype, and how to extend it is important in order to give your team the full picture. Getting feedback early will allow you to tailor the setup to the team’s needs.

Implementing Change

Tormented by issues like unreliable tests, high maintenance costs, and time-consuming test authoring, our team decided to ditch the old test framework and start from scratch.

More specifically we decided to change the following:

Tools

Moved from CasperJS to NightwatchJS and Selenium. Our criteria were parallel and cross-browser capabilities, and at that point in time NightwatchJS was one of the few frameworks that provided those out of the box. As PhantomJS faded out and headless browsers advanced, there was a need to move to a more universal framework.

Utilised a Selenium Grid setup with Docker and Zalenium. This meant faster development, as every developer could point to the same Selenium server without needing a local setup.
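A minimal sketch of what such a shared-grid configuration can look like in a Nightwatch config file; the hostname and folder names are placeholders, not the team's actual values:

```javascript
// nightwatch.conf.js -- sketch: every developer points at the same
// Zalenium/Selenium Grid instead of starting a local Selenium server.
const config = {
  src_folders: ['tests'],
  selenium: {
    start_process: false, // the grid is already running in Docker
  },
  test_settings: {
    default: {
      selenium_host: 'zalenium.internal.example.com', // hypothetical grid host
      selenium_port: 4444,
      desiredCapabilities: { browserName: 'chrome' },
    },
    firefox: {
      desiredCapabilities: { browserName: 'firefox' },
    },
  },
};

module.exports = config;
```

With `start_process` disabled, the test runner simply opens sessions against the grid, so a fresh checkout of the repository is enough to run the suite.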

UI elements

Changing the tools made the tests easier to maintain and write, but a number of flaky tests remained as a result of inconsistent coding. Therefore we decided to refactor the tests as well.

Created one unique HTML attribute for every HTML element we wanted to interact with. We got rid of complicated selectors that referred to CSS classes and, with support from the developers, made it easy to add custom HTML attributes to any element.

Moved all the CSS and XPath selectors out of the tests to a page object file. This made it possible to re-use selectors and react to changes faster.
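Combining the two ideas above, a Nightwatch page object can hold all the selectors in one place and target the dedicated test attributes; the attribute name, page, and command below are illustrative, not the team's actual code:

```javascript
// page_objects/login.js -- sketch of a page object: selectors live here,
// not in the tests, and each one targets a unique test-only attribute.
const loginPage = {
  url: '/login',
  elements: {
    username: { selector: '[data-test="login-username"]' },
    password: { selector: '[data-test="login-password"]' },
    submit:   { selector: '[data-test="login-submit"]' },
  },
  commands: [{
    // tests call page.logIn(...) instead of repeating selectors
    logIn(user, pass) {
      return this
        .setValue('@username', user)
        .setValue('@password', pass)
        .click('@submit');
    },
  }],
};

module.exports = loginPage;
```

When the markup changes, only this file needs updating; every test that uses `@username` keeps working.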

Execution

Many of the test failures were the result of wrong test data used by the automated tests. Changing the data handling was a necessary step to clean up flaky tests.

Made the tests data-driven by providing different data sets based on the environment under test. Constantly having three to five environments and talking to different APIs caused a lot of pain until we utilised global variables that change based on the environment under test. Nightwatch provides this out of the box. Again, this made maintenance faster and authoring new tests easier.
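A sketch of such an environment-keyed globals file; the environment names, URLs, and variable names are made up for illustration:

```javascript
// globals.js -- sketch: one data set per environment, selected at runtime.
const environments = {
  staging: {
    apiBaseUrl: 'https://api.staging.example.com', // hypothetical URL
    testUser: 'qa-staging@example.com',
  },
  production: {
    apiBaseUrl: 'https://api.example.com',
    testUser: 'qa-prod@example.com',
  },
};

// Nightwatch exposes whatever this file exports to every test as
// browser.globals, so tests never hardcode environment-specific data.
const globals = environments[process.env.TEST_ENV || 'staging'];

module.exports = globals;
```

A test then reads `browser.globals.apiBaseUrl` instead of a hardcoded URL, so the same test runs unchanged against any environment.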

Set up and tore down test data before and after the tests by calling the API directly. In the past we had UI tests generating test data for the next tests; this was horrible, impractical, and extremely unreliable.
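A sketch of what API-based setup and teardown can look like with suite-level hooks; the endpoint, payload, and selectors are hypothetical, and `fetch` stands in for whatever HTTP client the team actually uses:

```javascript
// Sketch: seed a fixture over HTTP before the UI test and delete it
// afterwards, instead of one UI test generating data for the next.
const API = process.env.API_BASE_URL || 'https://api.staging.example.com';

const suite = {
  before(browser, done) {
    // create a fresh fixture directly via the API -- no UI involved
    fetch(`${API}/photos`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ title: 'fixture-photo' }),
    })
      .then((res) => res.json())
      .then((photo) => { browser.globals.photoId = photo.id; done(); });
  },
  after(browser, done) {
    // remove the fixture so the next run starts from a clean state
    fetch(`${API}/photos/${browser.globals.photoId}`, { method: 'DELETE' })
      .then(() => done());
  },
  'photo appears in the gallery': function (browser) {
    browser.url('/gallery').assert.containsText('[data-test="gallery"]', 'fixture-photo');
  },
};

module.exports = suite;
```

Because each suite creates and destroys its own data, a failure in one test can no longer cascade into the next.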

Structure

Part of the test refactoring was also the division of the tests into smaller units in order to make them more manageable and reduce the overall running time.

Divided the tests into test suites to make them faster. You probably don’t need every test to run after every commit; distinguish the ones you need daily from the ones you want to run once a week or overnight.
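One common way to do this in Nightwatch is tagging, so CI can pick a subset of the suite per trigger; the tag names and test are illustrative:

```javascript
// Sketch: tag a test so CI can select subsets, e.g. run only
// `nightwatch --tag smoke` on every commit and everything overnight.
const smokeTest = {
  '@tags': ['smoke'],
  'home page loads': function (browser) {
    browser
      .url('/')
      .assert.elementPresent('body')
      .end();
  },
};

module.exports = smokeTest;
```

A nightly job simply omits the `--tag` filter (or uses a different tag) to cover the slower, less critical tests.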

Rewrote the tests to small individual scenarios instead of having end-to-end tests. From our past experience with Selenium and other UI automation tools, we knew that the larger the test, the more likely it was to have a false failure.

Configured the framework to run the tests in parallel, in multiple browsers, in order to increase coverage and cut down execution time.
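In Nightwatch, a sketch of the relevant configuration knobs looks like this (the worker count and environment names are illustrative):

```javascript
// Sketch: run test files concurrently via workers, and define one
// environment per browser so `nightwatch -e chrome,firefox` runs both.
const config = {
  test_workers: {
    enabled: true,
    workers: 4, // number of test files executed in parallel
  },
  test_settings: {
    chrome:  { desiredCapabilities: { browserName: 'chrome' } },
    firefox: { desiredCapabilities: { browserName: 'firefox' } },
  },
};

module.exports = config;
```

Paired with an on-demand grid, parallelism is bounded only by how many browser nodes can be spawned.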

Produced clear reports that make debugging easier. There is nothing worse than a failed assertion that is hard to track. Furthermore, by using JUnit reports we visualized the test results on Jenkins and were able to identify when most failures happened and what caused them.

Moved to an on-demand setup where new Selenium nodes are spawned based on browser and test requirements.

Results

We spend less time maintaining tests - The new setup allowed us to save a lot of time since we didn’t have that many flaky tests anymore. More specifically, we spend ~85% less time maintaining the tests.

The tests are now faster - Although the new framework runs in a real browser (making it slower), we can still run the entire smoke test suite in under 5 minutes compared to 25, as we now run the tests in parallel.

Developing new tests is faster - Having better structured tests and re-using components allowed us to save ~50% of developing time.

Developers can now maintain the tests - Involving the developers in the process gave them the confidence to contribute to the testing efforts.

The tests are now accessible to anyone - Anyone with the code can execute the tests in any browser, without the need for a local setup.

Confidence In Test Coverage

Did all those steps make the tests 100% reliable? No, but they increased that percentage and made debugging and updating the tests much easier. We don’t use test automation to get rid of manual testing, but as a source of early feedback and yet another safety net.

Regardless of the state of your test automation setup, UI tests are a fraction of the overall testing, and if treated as a panacea they can devour the team’s time without returning the investment.

On the other hand, if the right decisions are taken in all four steps, test automation can provide confidence and coverage, and become a great ally to your testing.

Author Bio:

Stratos finished his master’s degree in Cultural Informatics in 2012 and has since worked as a QA Engineer for several companies before settling at EyeEm, where he manages the testing efforts of the web platform.