A/B testing is a must-have for any product manager, and API-based platforms help you build an advanced A/B testing infrastructure in no time.

Building an A/B testing infrastructure

Tech giants like Netflix or Pinterest invest a small fortune in highly customized user experiment platforms. Smaller projects end up integrating SaaS tools like Optimizely or VWO, which are easy to deploy but limited in functionality. Is there a middle ground? In this article, I’d like to present four affordable cloud tools that can help you build the right product by enabling advanced A/B tests throughout the customer journey.

A/B Testing SEO



Let’s start with the acquisition part. Successful online products cannot skip search engine optimization in their efforts to attract new customers. SEO monitors, on-page optimizations, freelance copywriters, backlink rescue email campaigns – some companies employ massive resources to climb Google’s first-page ladder. If this is the case for your business, you should consider adding RankScience to your tactics.

RankScience (RS) is an early-stage startup that wants to automate the A/B testing of organic traffic. How does it work? The first, mandatory step is to channel your traffic through RS’s CDN (they promise a nominal latency below 25 ms). RS claims that, technically, this can be a 2-minute change for most customers. With this in place, RS can roll out content experiments – for example, it can take your landing page and host two variants, each with a different title. After a given time, the RS dashboard shows which variant has won, i.e., ranks higher in the SERPs. All of this is done automatically.
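To make the mechanism concrete, here is a minimal Python sketch of the kind of rewrite an edge proxy could perform for a title experiment. RankScience’s actual implementation is not public, so the function and page below are purely illustrative:

```python
import re

def apply_title_variant(html: str, new_title: str) -> str:
    """Rewrite the <title> tag of a page, the way a CDN-layer
    content experiment might serve an alternative headline."""
    return re.sub(r"<title>.*?</title>", f"<title>{new_title}</title>",
                  html, count=1, flags=re.S)

page = "<html><head><title>Old Title</title></head><body>...</body></html>"
print(apply_title_variant(page, "New, Hopefully Better Title"))
```

The proxy would serve the rewritten page to one half of the traffic and the original to the other, then compare the resulting rankings.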

RankScience claims an average boost to organic search traffic of 37 percent within three months, arguing such gains are a substantial step up from the competition (TechCrunch)

A/B Testing User Experience

Suppose you’ve convinced visitors to come to your website. Now you need to overcome an equally hard challenge – onboarding. Onboarding involves a plethora of different things for various types of online business, but they have one thing in common – once you build a user-friendly application, the odds of conversion become higher.

But there’s no silver-bullet solution to achieving a top-notch UX. It’s your job as a product manager to iterate until you find it. In software, the iteration consists of three parts: design, build, measure. Your task is to run this loop as fast as possible. Sounds easy, but when you want to iterate over many features in parallel, things get complicated pretty quickly. This is the problem LaunchDarkly (LD) tackles.

A huge thing for us is risk. LaunchDarkly takes risk off the table – so says a LaunchDarkly customer

LD offers an API-first platform for running experiments with “feature flags”. Thanks to LD, you can release features when you want and to the customers you want, taking the burden of the roll-out off the developers’ shoulders. LD surfaces metrics showing which variants perform better and lead to a better user experience overall. Lastly, when a feature or variant goes totally south, you can kill it with the click of a button.
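To illustrate the concept – this is not the actual LaunchDarkly SDK, just a minimal sketch with invented names – a feature flag combines a kill switch with a deterministic percentage rollout:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class FeatureFlag:
    key: str
    enabled: bool          # global kill switch
    rollout_percent: int   # 0-100: share of users who see the feature

def evaluate(flag: FeatureFlag, user_key: str) -> bool:
    """Return True if the feature should be shown to this user."""
    if not flag.enabled:   # killed "with the click of a button"
        return False
    # Hash the flag+user pair so each user gets a stable decision.
    digest = hashlib.sha256(f"{flag.key}:{user_key}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag.rollout_percent

new_checkout = FeatureFlag("new-checkout", enabled=True, rollout_percent=25)
print(evaluate(new_checkout, "user-123"))
```

A real flag platform adds targeting rules, audit trails, and analytics on top, but the core evaluation works along these lines.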

A/B Testing Promotions

When you get to the point where your customers are pleased with your intuitive UI, the last step is to convince them with your offer. The benefits can be many, but usually everything boils down to cost. You can do many things to change the perception of cost, and these things can be tested with Voucherify – an API-first platform for launching personalized promotions faster.

Voucherify’s building blocks save you the time and internal resources you’d otherwise spend on development, and let your marketing team focus on growing and retaining your customer base. The software enables testing of multiple coupon, discount, referral, and loyalty campaigns against your customer segments. The API and programmatic building blocks allow you to create highly personalized incentives on top of your CRM data. Next, it automates promotion distribution by integrating with your email, SMS, landing page, push notification, mobile app, and other channels. Finally, when a promotional campaign is live, Voucherify’s dashboard shows how it performs and whether there are any incidents with redemption.
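As a rough illustration of the programmatic approach, the sketch below assembles a discount campaign payload in Python. The field names are modeled loosely on Voucherify’s REST API and should be treated as assumptions – check the official API reference before integrating:

```python
import json

# Hypothetical campaign payload for an A/B-tested 10% discount.
# Field names are illustrative, not a verified API contract.
campaign = {
    "name": "Spring Sale A/B Test",
    "voucher": {
        "type": "DISCOUNT_VOUCHER",
        "discount": {"type": "PERCENT", "percent_off": 10},
    },
    # Tagging the variant lets you compare campaigns per segment later.
    "metadata": {"experiment": "spring-sale", "variant": "A"},
}
body = json.dumps(campaign)
print(body)
```

In a real integration you would POST this body to the API with your application keys; here we only build and inspect the payload.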

Has Voucherify helped us to improve our sales performance? Definitely! It’s a little bit dependent on the target group and the product selection, but most of the campaigns have a rate of 5% or higher! – says a Voucherify user

Building a cost-effective mobile A/B testing infrastructure with GTM

The pillar of conversion rate optimization is a solid A/B testing infrastructure. While there are well-described go-to solutions for the web, the mobile industry hasn’t nominated a leader yet. Here, I’d like to focus on one of the most cost-effective setups – content experiments with Google Tag Manager (GTM).

Using GTM for mobile content experiments hadn’t been a popular idea on our team – that is, until Amazon announced they would discontinue support for their A/B testing service, which we used. We had to find a substitute, so we decided to look through the tools we were already using. This led us to Google Tag Manager.

After reading the Content Experiments feature description, it turned out that this can be a viable option. Why? Because:

It’s free,

It’s a tool that we already know,

And it lets our marketers optimize in-app marketing without needing developers’ help.

We decided to give it a go and we’ve stuck with it since. Now, we’d like to showcase this tool so that you can figure out if it can be a fit for your mobile A/B testing efforts too.

What is GTM?

A word of explanation if you aren’t familiar with Tag Manager. Its primary use case is simplifying tracking management in web and mobile apps. It gives you the ability to add and update your own tags for conversion tracking, site analytics, or remarketing without having to wait for website code updates. On top of that, it offers the Experiments API. Here’s how it works.

The Experiment

Let’s come up with a story to better illustrate the problem GTM solves. Assume that we want to give Rachel (the marketer) the ability to set up multivariate tests in our mobile app.

She wants to test a simple scenario – to check how two different headlines influence user engagement. She assumes that the current copy will yield worse results than the new ideas she has in the back of her mind, and she wants to measure engagement by session duration.

So how can we approach this case with GTM? It would look something like this:

Rachel uses GTM to set up new experiments. The app connects to GTM to receive the experiment parameters that modify its appearance and behavior. The app sends user tracking information back to GTM, which in turn pushes it to Google Analytics.
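On the app side, the core idea is simple: ship sensible defaults and overlay whatever parameters GTM delivers. A minimal, SDK-agnostic Python sketch (the dictionary stands in for the container the real GTM SDK would provide):

```python
# Defaults compiled into the app; if GTM is unreachable,
# the user simply sees the original experience.
DEFAULTS = {"headline": "The original call to action"}

def get_experiment_params(container=None):
    """Merge experiment parameters received from GTM over the defaults."""
    params = dict(DEFAULTS)
    if container:
        params.update(container)
    return params

# Parameters as GTM might deliver them for a variant:
remote = {"headline": "The call to action example you can't help but click"}
print(get_experiment_params(remote)["headline"])
print(get_experiment_params(None)["headline"])  # offline fallback
```

The important property is the fallback: the app never breaks if the experiment configuration fails to load.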

Now that we have a process overview, let’s break it down and run through every step from the bottom up.

Prerequisites

The very first step is to sign up for a free GTM account (if you don’t have one). At this stage, you should consider setting up a Google Analytics tag for tracking. It takes only a few minutes with this tutorial.

When this is done, we need to configure GA to work with mobile content experiments. To do so, you should add a special Mobile View and then link it with GTM.

Variations

With GTM all set up, we can get down to the experiment design. We start by defining which content parts will vary in the app. This tells us how to map the variations in the GTM wizard and also allows developers to update the code accordingly.

Rachel’s case is straightforward; she has just two versions of the headline – the original copy and the one she wants to test it against:

“The original call to action”

“The call to action example you can’t help but click”

To include these variants into GTM, go to Variables and create a new Google Analytics Content Experiment.

In the wizard, the first step is to put the original’s and the variation’s parameters into the editor. To do so, click on the “Original” item and type the headline parameter with the corresponding value, as in the picture below. Do the same for the first Variation. (Note that the editor supports JSON, so you can also input complex, nested variable structures.)
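For illustration, Rachel’s parameters could be expressed like this. The `banner` object is a made-up example of the richer, nested JSON structures the editor accepts – it is not part of her actual experiment:

```python
original = {"headline": "The original call to action"}
variation = {"headline": "The call to action example you can't help but click"}

# The editor supports JSON, so nested structures are possible too
# (hypothetical example of a richer variation):
nested_variation = {
    "headline": "The call to action example you can't help but click",
    "banner": {"color": "#ff6600", "visible": True},
}
print(original, variation, nested_variation)
```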

The second step is to choose the Experiment Objective. As mentioned, Rachel is going to measure the time users spend in the app. This is one of the built-in objective measures in GTM, so you can just select it from the dropdown menu. (Bear in mind that she can also use goals she’s already been tracking in Google Analytics, like signups, conversions, etc.)

Statistics

Now that she has defined what to show to users, let’s define when to show it. GTM comes to the rescue here too: it gives an easy way of defining how frequently each variation is exposed. See the picture below; it’s self-explanatory:

There’s something worth highlighting at this point, though: all the statistical work is done entirely by Tag Manager. Developers don’t need to be aware of the number of variations or any other thresholds Rachel modifies in her experiment. The Tag Manager SDK handles variant distribution itself and reports back the fact that a user has seen a particular variation.
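Conceptually, the SDK’s job boils down to deterministic, weighted bucketing. A simplified Python sketch – not GTM’s actual algorithm, just the idea:

```python
import hashlib

def assign_variant(user_id: str, weights: dict) -> str:
    """Pick a variant for a user according to exposure weights
    (percentages summing to 100). The same user always lands in
    the same bucket, so the experience stays consistent."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    point = int(digest, 16) % 100
    cumulative = 0
    for name, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return name
    return next(iter(weights))  # fallback if weights sum to less than 100

weights = {"original": 50, "variation-1": 50}
print(assign_variant("user-7", weights))
```

Because the assignment is a pure function of the user ID, no server-side state is needed to keep users in their bucket.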

GTM also offers an additional set of rules to control when a particular experiment should be active. For example, Rachel may want to push out the experiment only to people using a particular version of her app. She can configure this herself within the wizard:

You should also know that GTM comes with a plethora of conditions (including custom variables) she can use to include or exclude users from the experiment. Imagine she wants to target only premium users or limit the experiment to Germany – this is all at her fingertips with the rule wizard.
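The targeting rules themselves encode simple predicates. For instance, the “premium users in Germany” rule from above boils down to logic like this hypothetical sketch (field names are invented for the example):

```python
def in_experiment(user: dict) -> bool:
    """Example targeting rule: premium users in Germany only.
    GTM lets Rachel express such conditions in the rule wizard;
    this just shows the logic the rules encode."""
    return user.get("plan") == "premium" and user.get("country") == "DE"

print(in_experiment({"plan": "premium", "country": "DE"}))  # included
print(in_experiment({"plan": "free", "country": "DE"}))     # excluded
```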

Deployment and publishing

Alright, so Rachel has defined her first experiment. Before she can make it live, she needs to confirm it with the developers. The reason is that the dev team needs to verify that they use the same variable keys in the app code. Otherwise, it simply won’t work.

Once that’s done, she can kick off the experiment. It comes down to clicking the Publish button, and her variants will be underway.

Rachel can evaluate how her variations have been performing in the Google Analytics Experiment section.

As described above, during the experiment GTM takes care of selecting the users who will be exposed to the variants. It will also choose the winning variation when the experiment ends. So, all in all, there’s really no need for developers to assist at any other stage of the campaign.

This is just a brief introduction to Mobile Content Experiments with GTM. The tool offers far more than we’ve demonstrated, and we encourage you to experiment (pun intended) with it yourself. The learning curve is gentle; plus, if you already use GTM for other tracking purposes, a solid and free A/B testing infrastructure might not be as far away as you think.

API-based programmatic platforms help you build an advanced A/B testing infrastructure in a matter of days instead of months. Experiments can be integrated into your current ecosystem faster, results are provided in near real time, and operational and developer effort is reduced. With their affordable pricing and the ability to start small, there’s little excuse for not having a robust experimentation environment for your product. Make sure, though, that the software vendors you consider are enterprise-ready.

{{CTA}}

Are you ready to test your promotions?

Let's talk

{{ENDCTA}}