Here’s What You Can Learn from “Losing” Split Tests

When it comes to testing, we’re all naturally averse to that word “fail.”

It seems to be built in from when we were young – a “failed” test means that you didn’t do something right or you don’t have the knowledge required to succeed.

Is this the case with A/B testing too? We’d argue not: while you might have tests that are “losers,” that doesn’t mean you’ve failed. In fact, a losing test doesn’t mean you lack knowledge – you just learned something from that test, right?

Failed or inconclusive tests aren’t a reason to give up on A/B testing – in fact, they can provide you with fuel to keep testing and optimizing your website. Let’s take a closer look:

Get 3 practical examples of learning from losing tests here

Expect to have losing tests

In the course of our work here at Team Croco, we find that three out of 10 tests tend to be “winners,” meaning that they produce statistically significant, clear results. That means the other 70 percent won’t!

For some other prolific A/B testers, this “fail” figure is even higher, at 80–90 percent. The bottom line is that if you implement any kind of A/B testing program, you can expect a relatively high number of failed or inconclusive tests.

The four types of test results

Results from A/B tests fall into one of four categories:

  1. A positive result. This means that one version clearly beats the other and your testing is seen as a success. For example, one CTA gets 40 percent better results than another.
  2. A negative result. This means that the version you hypothesized should be better does worse than your current version. This is usually seen as a “fail.”
  3. A neutral result. Both versions performed either the same or with no significant difference between them. This is usually seen as a “fail” too.
  4. The test was invalid. This is a failure of the testing methodology, meaning that the data won’t be of any use. For example, what if you didn’t test mobile and desktop traffic separately, and there was a bug with the iOS version? Your test results would be skewed.
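If it helps to make “statistically significant” concrete, here is a minimal sketch that classifies a result into the first three categories using a standard two-proportion z-test. (Category 4, an invalid test, is a methodology problem that no formula can catch.) The counts, the 0.05 threshold, and the function name are our own illustrative choices, not the output of any particular testing tool:

```python
# Minimal sketch: classifying an A/B result with a two-proportion z-test.
# Counts and the alpha threshold are hypothetical; real tools may differ.
from statistics import NormalDist

def classify(conv_a, visitors_a, conv_b, visitors_b, alpha=0.05):
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    if p_value >= alpha:
        return "neutral: no significant difference"
    return "positive: B beats A" if rate_b > rate_a else "negative: B loses to A"

# 5,000 hypothetical visitors per variation:
print(classify(400, 5000, 465, 5000))  # -> "positive: B beats A"
```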

Now, let’s talk a bit about what “failing” actually means. If you go to senior stakeholders in your company with results #2–#4 above, the chances are they will view these as negative outcomes. There will be discussions about whether A/B testing is worth it, and whether they can see ROI from the exercise. Because no action resulted from the tests, they conclude the tests have no value.

This isn’t the case. Out of those potential “fail” results, we’d see poor testing methodology as the only genuine failure; the other two instances did provide a result – it just perhaps wasn’t the one you were looking for.

Consider this, from a Harvard Business Review article by Kaiser Fung, who estimated that 80–90 percent of the A/B tests he had conducted were “failures”:

“The culprit is not the testing program but the misuse of the word “failure.” Of the tests that I led, I consider 80 to 90 percent successful because they either lead to a change in approach or confirm the current approach. Far from being a waste, confirming you’re on the right track avoids unproductive use of funds while re-directing the team’s energies to more useful projects.”

When you test a specific hypothesis, you never really “fail.” In fact, sometimes you can learn even more from tests that didn’t pan out as expected.


Optimization is a learning process

It’s important to treat website optimization – and any A/B testing program – as a learning process. It’s not so much about creating lists of things to test and hoping to see some improvement; it’s a systematic process of testing based on the data you have gathered.

Whether your test results are negative or inconclusive, you can still learn many important things:

You learn more about your users

Every test “loser” is still an opportunity to learn more about your users. Who are they exactly and what preferences do they show? How do they interact with the elements on your site?

Testing is about making incremental improvements to the user experience, but to do that well you need deep knowledge of your users. If you’ve gathered good user data and created detailed profiles, you can make some reasonable assumptions, but you’re only going to prove (or disprove) those through testing.

For example, you might assume that certain language will appeal to your users, or that a particular layout will work best for your sales funnel. Being proven wrong on any of those assumptions isn’t a bad thing – it means you have a better idea of what to try next time!

You get ideas for further testing

Sometimes you get a result that is too close to be conclusive, but provides you with enough to help you design the next test. You analyze your treatment and results, then improve your hypothesis, and test again.

A/B testing is an iterative process. The odds of you solving the problem you set out to solve on the first try are actually fairly slim. Remember, if you actually knew exactly what was going to work, there probably wouldn’t be a need to test in the first place.

Look at your test execution, too. Let’s say you’re testing different headlines for effectiveness and find no difference between two versions. You might conclude that clarity isn’t an issue – but were your changes really distinct enough? It’s possible that a new test, this time written in the voice of the customer, may produce different results.

You learn about potential reasons for test failures

Test failures prompt you to examine the potential reasons behind them and learn for next time. For example, some common reasons for a negative or inconclusive result include:

  • You don’t know your users very well
  • You took a bit of a “shot in the dark” – perhaps your testing choices were somewhat random, based on something you read somewhere
  • You don’t really have a clear goal or hypothesis for testing
  • Your test methodology is faulty
  • You gave up too soon! It’s important to test a variation exhaustively so that environmental factors (time of day, day of the week…) are accounted for – a rough sample-size estimate, like the sketch below, helps you plan how long to run.
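As a back-of-the-envelope illustration of “don’t give up too soon,” here is a sample-size sketch using the standard two-proportion power formula. The baseline rate, expected lift, and significance/power levels are hypothetical inputs you would replace with your own numbers:

```python
# Rough sample-size sketch for a two-sided two-proportion test.
# All inputs are hypothetical; plug in your own baseline and target lift.
from math import ceil
from statistics import NormalDist

def visitors_per_variation(p_base, relative_lift, alpha=0.05, power=0.80):
    p_new = p_base * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96
    z_power = NormalDist().inv_cdf(power)          # e.g. 0.84
    p_bar = (p_base + p_new) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_power * (p_base * (1 - p_base) + p_new * (1 - p_new)) ** 0.5) ** 2
         / (p_new - p_base) ** 2)
    return ceil(n)

# A 3% baseline conversion rate and a hoped-for 20% relative lift:
print(visitors_per_variation(0.03, 0.20))  # ~14,000 visitors per variation
```

Run the test until each variation has seen at least that many visitors, spread over full weeks, so day-of-week effects average out.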

What to do with inconclusive results

Inconclusive results can be disappointing, particularly when you’ve followed best practices for A/B testing and ensured that your methodology is good. What then, can you do about those inconclusive results?

Look at segments

Here’s another opportunity to learn. Look into specific segments in your data to see if certain segments are swaying your results. For example, it’s possible for the gains of one particular segment to cancel out another. Perhaps (if it were possible), those segments should have been tested separately in the first place.

On an ecommerce site, you have brand new users, people who have subscribed but haven’t yet bought anything, and returning customers who have previously bought things. These can naturally form different segments with different preferences. If I came to your site for the first time, I might require longer explanations or descriptions, whereas if I were returning, I might already be familiar and not need the longer description. When lumped together, these groups might cancel each other out.
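To see how that cancellation can play out, here is a minimal numeric sketch with entirely made-up counts for two such segments – new visitors preferring the variation, returning buyers preferring the original:

```python
# Made-up per-segment counts showing opposite effects cancelling out overall.
# segment: (A conversions, A visitors, B conversions, B visitors)
segments = {
    "new visitors":     (150, 3000, 210, 3000),  # B wins: 5.0% vs 7.0%
    "returning buyers": (300, 2000, 240, 2000),  # A wins: 15.0% vs 12.0%
}

conv_a = vis_a = conv_b = vis_b = 0
for name, (ca, na, cb, nb) in segments.items():
    print(f"{name:17}  A: {ca / na:.1%}   B: {cb / nb:.1%}")
    conv_a += ca; vis_a += na
    conv_b += cb; vis_b += nb

# Lumped together, both variations convert at exactly 9.0% - the test
# looks inconclusive even though each segment had a clear preference.
print(f"{'overall':17}  A: {conv_a / vis_a:.1%}   B: {conv_b / vis_b:.1%}")
```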


Keep testing

Testing velocity is an important trait of successful website optimization; however, check your assumptions first. If your testing was based on trying to validate someone’s opinion, or on something you read on the Internet, you might be best to drop it and move on to something based on your own website data.

If you’re worried about the reaction of stakeholders in terms of ROI and characterizing testing as a “failure,” it’s a good idea to ensure that everyone is on the same page in the first place. Getting everyone to agree that testing is about learning can be a good way to take the angst out of “fails” and take something productive from them.

Get our practical examples of learning from failed tests here

Final thoughts

Don’t be put off by testing “fails” – in fact, perhaps it is better to change the narrative around what a “fail” actually means. Poor testing methodology might be considered a true failure (although, of course, you can still learn from that). On the other hand, disproving a clear hypothesis is still a result that you can use.

How will you take what you learned from a negative or inconclusive test and use it to inform your next round of testing? Remember that testing is a process, and seen this way, every test you run provides you with usable data.

Above all, keep iterating and testing regularly. Optimization is a process that can adjust over time – we’re not still using the exact same sales flows as we did in 2012, so don’t stop just because you think you’ve found something that is the “most optimal.”
