There are many questions surrounding running tests – how many tests can you run? What is the success percentage of your tests? Why is that important? And why do we run so many tests at The Next Web?

At The Next Web we ran around 200 tests this year, and in 2016 we’ll aim for a couple hundred more. Why? Because we believe in testing and in doing marketing in a data-driven way.

We test a lot to make sure we’re not making decisions based on somebody else’s opinion. Why 200-plus tests? We have a lot of traffic and the right people working on the CRO of the site, so we’re able to run tests at a high velocity. More on the process of how we run our testing can be found here. In this blog post, I’d like to give you an inside look at why we have such a high velocity and how that can help you decide whether testing is useful for you.

“What are you missing out on by not testing more?”

What percentage of tests are really successful?

Do you know how often the new A/B test variants you created didn’t win? You’re probably thinking of all the times you were right, but industry averages for A/B testing hover around a 20 percent success rate. In other words, in four out of five A/B tests the winning variant is the original one already running on your site. Isn’t that a waste of time?

At The Next Web we’re lucky enough to do a bit better. In our case, the variants we come up with have a win rate of around 30 percent – roughly 50 percent higher than the industry average. Not bad. But as you can see, even after 200 tests a year, we still can’t predict with certainty which changes will work the way we intend.

Isn’t that a waste of time, if they’re not all successful?

No, no, no, absolutely not! The 70 percent of tests that don’t produce a winning variant are still considered successful. More than most people think, a ‘losing’ test can provide you with a lot of learnings. At the very least you’ve proved what doesn’t work, which will help you find a winning variant next time.

But what do we learn? That we’re always wrong?

It’s a question we get often, and the easy answer is: certainly not. It might feel like a waste of time since you’re wrong most of the time, but it also costs money if we don’t know what works for our users. What happens if we put something live without testing it? Instead of just losing a testing period, you could lose revenue for the rest of the year.

“Not testing will cost more money in the long run”

How many tests can you run?

Do you really know how many tests you can run on your company’s website? You probably don’t, right? But I bet you want to know. Luckily, there are ways to figure out how long a given test takes to run, which means you can also figure out how many tests you can run annually.

Use case: I want my users to click on the contact button on the homepage. If, on a monthly basis, you have 10,000 visitors and around 1,000 clicks on the contact button in that period, you can use these numbers (and a test calculator) to calculate the number of visits it would take to reach a certain significance level – I would advise always going with more than 95 percent.
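A rough sketch of what such a test calculator does, using only the standard library: the normal-approximation sample-size formula for comparing two proportions. The article only gives the traffic numbers (10,000 visitors, ~1,000 clicks, i.e. a 10 percent baseline rate); the 20 percent relative lift, the 50/50 traffic split and the 80 percent power are illustrative assumptions, not figures from the article.

```python
import math

def sample_size_per_variant(baseline_rate, mde_relative,
                            z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant for a two-proportion z-test.

    Normal-approximation formula. z_alpha = 1.96 corresponds to a
    two-sided 95 percent significance level (as the article advises);
    z_beta = 0.84 gives 80 percent power (assumed here).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)  # rate we hope to reach
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# The article's use case: 10,000 monthly visitors, ~1,000 contact
# clicks -> 10 percent baseline conversion on that button.
# Assumed: we want to detect a 20 percent relative lift.
n = sample_size_per_variant(baseline_rate=0.10, mde_relative=0.20)
monthly_visitors_per_variant = 10_000 / 2  # assumed 50/50 split
months_per_test = n / monthly_visitors_per_variant
tests_per_year = 12 / months_per_test

print(f"{n} visitors per variant, ~{months_per_test:.1f} months per test,"
      f" ~{tests_per_year:.1f} tests a year on this one button")
```

Note how sensitive the result is to the effect size: halving the lift you want to detect roughly quadruples the required sample, which is exactly why low-traffic sites can run so few tests.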

Have you wondered yet whether it’s still valuable for your company to run tests? If you can’t run more than a dozen tests a year, testing might not be the best tactic to grow your business, and you may want to focus first on getting more traffic to your website before investing time in a testing program.

How do we deal with this – and how should you?

Because we know where our traffic is, it’s easy to use our conversion rates to calculate the number of tests we can run on each area of the site. Once we’d figured this out, we made a schedule of when to focus on which tests. With this information we plan up front which tests we run on, for example, our desktop site and which on the mobile version.

With that information – the number of tests we run per area – we also know our success percentage for a certain area on a certain page, which can help you figure out where to focus your attention. If you have five areas you’d like to test but can only run five tests a month, you might want to focus on the area with the highest potential impact and put all your effort there.

Quantity versus Quality?

It would be easy to conclude that we prefer high velocity purely for the quantity of tests, but with our setup that’s not the case. Because we believe in high velocity, we try to make our process as lean as possible, which means we can sometimes spend more time increasing the quality of the tests we run. The more tests we run, the more we learn, and that in turn increases our success rate: better pre-analysis and better hypotheses produce better winners.

So how do you create higher quality at a higher velocity?

But how does that work in real life? If you can run at a higher velocity, you also want to increase your quality, as that will bring you more success in the long run, right? It’s a fact that running a lot of tests gives you more insights. In our case, we run around 30 to 40 tests a year on a single area of the site, which gives insights into what works better: copy changes, style changes or colour changes. By doing this you’ll be able to see what makes the biggest impact. If changing the copy consistently shows a bigger impact than changing the style, why not focus solely on testing the copy for the next five to 10 tests and see if you can improve it?
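Spotting which category of change wins most often only requires tallying your test log. The article doesn’t publish its per-test data, so the log below is entirely hypothetical – a minimal sketch of the bookkeeping:

```python
from collections import Counter

# Hypothetical test log: (change category, did a variant win?).
# The categories mirror the article (copy, style, colour); the
# outcomes are made up for illustration.
tests = [
    ("copy", True), ("copy", False), ("copy", True), ("copy", True),
    ("style", False), ("style", False), ("style", False),
    ("colour", True), ("colour", False),
]

runs, wins = Counter(), Counter()
for category, won in tests:
    runs[category] += 1
    wins[category] += won  # bool counts as 0 or 1

for category in runs:
    rate = wins[category] / runs[category]
    print(f"{category}: {wins[category]}/{runs[category]} wins ({rate:.0%})")
```

Even a simple tally like this, kept up to date over 30 to 40 tests, shows where your hypotheses are strongest and where the next batch of tests should focus.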

But that’s just the start. As you have more data available on your winners, you are also able to write a better hypothesis to find out what triggers the user to click or use a certain area in some way.

“Changing the size of the button will increase the number of clicks”

“Changing the size of the button will increase its visibility, as users only see the first 30 percent of the page, which was proved in the analysis we did on xx-xx-xxxx. That’s why we increased the size of the button to xxx pixels, to focus more on pointing users in the right direction”

Which hypothesis reads as though real work went into it? Pretty sure it’s the second one, right? It’s based on a real thought process: somebody put time into using past results to come up with a new hypothesis, which makes the test easier to learn from.

What’s next?

What can you do with this information? I hope a lot. What I want to get across is that, first and foremost, you should figure out how many tests you can run and decide for yourself whether A/B testing is really the most important thing to work on at the moment. If it isn’t, hopefully you’ll focus on the right task at hand: getting more traffic. Later on you can run your tests for shorter periods of time, or find the right ways to prioritise your resources to work on the testing program you set up before reading this blog post.

How many tests do you run at your company? What’s keeping you from running all the tests you want?