Success in the new age of software development depends on increasing the velocity of delivery -- meaning speed, agility, and efficiency -- while continuing to meet customer expectations when it comes to quality. Ultimately, creating excellent software faster comes down to an effective pipeline.

Part of making that pipeline effective is optimizing your automated testing and minimizing the false positives those tests produce. When tests fail because the code is flawed, progress slows, but at least it's for a good reason. When tests fail for reasons unrelated to the code, that's pure wasted time.

False positives can be particularly challenging with UI testing, and we see this challenge often with Selenium and Appium, two popular test automation frameworks. We often find tests built on assumptions that don't hold reliably, especially in a world of modular, dynamic applications and shifting network conditions.

False positives could be considered the arch nemesis of your beloved build and CI pipeline: they cry wolf, instill doubt in your developers, and raise questions about the value of your test automation. Fortunately, organizations are taking these problems seriously, and we have observed a number of best practices for protecting against false positives in testing. Even better, most of them are very straightforward.

Here are eight ways to protect your build from false positives.

1. Use reliable environment configurations.

It’s important for the “image” used for running the tests to be static, as small changes to the environment can cause unexpected problems with behavior and thus reliability.

2. Use a dedicated environment.

Ideally you want each test to run on a dedicated environment that has never been used before and will never be used again. Caching operating system state or trying to run multiple test sessions simultaneously on the same environment can cause conflicts -- especially regarding UI and focus events.

3. Keep your tests short.

It is a best practice to separate your tests into small modular pieces that focus on one piece of functionality. It’s easier to debug, reproduce the issue, and address it. A modular approach also encourages building features as small, self-contained components, which promotes reusability.
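The contrast can be sketched with a small, self-contained example. `LoginPage` here is a hypothetical stand-in for a page object in your application, not part of Selenium or any real framework; the point is that each test exercises exactly one behavior, so a failure points directly at the feature that broke.

```python
import unittest

# Hypothetical page object standing in for the application under test.
# The class and its methods are illustrative, not a real framework API.
class LoginPage:
    def __init__(self):
        self.error = None
        self.logged_in = False

    def submit(self, user, password):
        if user and password == "secret":
            self.logged_in = True
        else:
            self.error = "Invalid credentials"

# One small test per behavior: easy to debug, easy to reproduce.
class TestLogin(unittest.TestCase):
    def test_valid_login_succeeds(self):
        page = LoginPage()
        page.submit("alice", "secret")
        self.assertTrue(page.logged_in)

    def test_invalid_login_shows_error(self):
        page = LoginPage()
        page.submit("alice", "wrong")
        self.assertEqual(page.error, "Invalid credentials")
```

When `test_invalid_login_shows_error` fails, you know the error-handling path broke; you never have to untangle which of a dozen workflow steps was at fault.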

4. Keep your tests independent.

It’s important to avoid the temptation to write tests that exercise a comprehensive UI workflow. Each test should have a minimal setup that establishes state or navigates to the component under test, plus a matching teardown. Independent tests provide the bonus of enabling straightforward parallelization.

5. Use the right locators for object identification.

That means using IDs, names, and CSS locators rather than brittle alternatives like screen coordinates or positional indexes. This also encourages you to build your application with testing in mind, making use of components that are self-contained and easy to interact with.
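One way to enforce this priority is a small helper that always picks the most stable locator an element offers. This is a sketch under stated assumptions: the attribute dictionary, the priority order, and the function name are all illustrative, not a Selenium API.

```python
# Preference order for locator strategies: stable, semantic locators
# first. Coordinates deliberately never appear in this list.
LOCATOR_PRIORITY = ["id", "name", "css"]

def best_locator(attrs):
    """Pick the most stable locator available for an element.

    attrs: a dict of locator candidates gathered for the element,
    e.g. {"id": "submit-btn", "css": "button.primary"} (illustrative).
    Returns a (strategy, value) pair, preferring IDs over names over
    CSS selectors, and fails loudly rather than falling back to
    anything positional.
    """
    for strategy in LOCATOR_PRIORITY:
        if attrs.get(strategy):
            return strategy, attrs[strategy]
    raise ValueError("No stable locator found; add an id to this element")
```

Failing loudly when no stable locator exists is the useful part: it turns a future flaky test into an immediate, fixable build error.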

6. Set up and tear down application state.

This ensures that each and every script starts from a known or controlled state in the application.
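In Python's built-in unittest framework, `setUp` and `tearDown` hooks are the natural place for this. `FakeApp` below is a hypothetical stand-in for your application; the pattern, not the class, is the point.

```python
import unittest

class FakeApp:
    """Illustrative stand-in for the application under test."""
    def __init__(self):
        self.cart = []

    def reset(self):
        self.cart.clear()

class TestCart(unittest.TestCase):
    def setUp(self):
        # Every test starts from the same known, controlled state.
        self.app = FakeApp()
        self.app.reset()

    def tearDown(self):
        # Leave nothing behind for the next test to trip over.
        self.app.reset()

    def test_add_item(self):
        self.app.cart.append("widget")
        self.assertEqual(len(self.app.cart), 1)
```

Because state is established in `setUp` rather than inherited from whichever test ran previously, the tests can run in any order, alone or in parallel, with identical results.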

7. Use dynamic object synchronization.

It’s good practice to implement a library to handle object synchronization in the framework. Faster execution and fewer false positives result from instructing the driver to wait for the UI to reach an expected condition before moving on to the next step.
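The idea mirrors Selenium's WebDriverWait: poll for an expected condition instead of sleeping a fixed amount. Here is a minimal, framework-free sketch; the function name, defaults, and polling interval are assumptions for illustration.

```python
import time

def wait_until(condition, timeout=10.0, poll=0.25):
    """Poll `condition` until it returns a truthy value or the timeout
    elapses. Returns the condition's result so callers can use the
    value they waited for; raises TimeoutError otherwise.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

In a real Selenium suite you would pass a condition that queries the driver, for example `wait_until(lambda: driver.find_elements(By.ID, "results"))` (hypothetical usage). The win over `time.sleep()` is twofold: the test proceeds the moment the UI is ready (faster), and it never proceeds before the UI is ready (fewer false positives).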

8. Automate test reruns.

Having logic to detect failures and automatically rerun tests can help smooth out your build results. When a test fails three times in a row, you know it is really broken.
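The rerun logic can live in a small wrapper around each test. This is a sketch, not a prescription: the function name and the three-attempt default are illustrative, and many CI systems and test runners offer equivalent built-in retry features.

```python
def run_with_retries(test_fn, max_attempts=3):
    """Run `test_fn`, rerunning on failure up to `max_attempts` times.

    A transient failure that passes on a later attempt is smoothed
    out; a test that fails every attempt is reported as genuinely
    broken by re-raising its last error.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            test_fn()
            return attempt  # how many attempts it took to pass
        except AssertionError as err:
            last_error = err
    raise last_error
```

One caveat worth keeping in mind: log how often each test needs a rerun. A test that routinely passes only on the second or third attempt is flaky, and retries are masking a problem you should still fix.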

Following this playbook will help ensure your application’s health by enabling you to zero in on real potential bugs. Teams will see their productivity increase as they stop wasting time on false positives, and as a result, quality code can be released sooner, allowing products to get to market faster.

Adam Christian is VP of Engineering at Sauce Labs, a cloud-based automated testing platform for Web and mobile applications.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.