We’ve spent a number of years A/B testing and have learned a few valuable lessons along the way.
The success of your A/B tests is only as good as the time you put into proper preparation and setup. Testing is no “quick fix” for a pressing problem; rather, it’s a considered process that needs a decent time commitment. You also need to know what makes A/B testing effective.
Over time, we’ve found that these “golden rules” help to lead to effective A/B testing:
#1. Know what you’re testing
This sounds like a very obvious thing to say, but surprisingly, clearly defining the test is often left out. You need to have a well-crafted hypothesis with clear cause and effect outlined, as this helps you to narrow the scope of your testing.
When we say “cause and effect” with regard to your hypothesis, the cause is the element you are changing, while the effect is what you expect will happen as a result of the change.
Sometimes people write poor hypotheses that don’t outline either of these things. For example, they might say something like “test shopping cart pages.” A more specific hypothesis would be: “a single-page checkout will decrease cart abandonment among first-time users.”
When you craft a very specific hypothesis, it helps you to look more objectively at the scope of your testing and the overall soundness of the testing idea. For example, you might notice that an idea seems weak once you have it spelled out, that you’re trying to measure the wrong things, or that you need to get even more narrow with your target audience.
Be very clear about the needle you are trying to move – this will help you to come up with the best metrics. We wrote about how to generate good test ideas here.
#2. Establish baseline data
You’ve got to have a good grasp of what your current state is in order to understand whether your test has made an impact. It’s important to gather baseline data for this reason.
You can get this data from a number of different sources. One of the most obvious is your Google Analytics. Look in detail at the different factors that go into whatever it is that you’d like to improve. It’s important to notice the little things because sometimes those are where you see a big difference. For example, if you only gathered data on the total traffic numbers, you might miss things like where that traffic came from, or whether they were new or returning visitors. Anything like that can have a bearing on your tests.
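As a sketch of what segmenting baseline data can look like, here is a minimal Python example that splits a hypothetical analytics export by new vs. returning visitors. All of the numbers and column names are invented for illustration; a real export from Google Analytics would need to be mapped into a similar shape first.

```python
from collections import defaultdict

# Hypothetical analytics export rows: (visitor_type, source, sessions, conversions)
# -- these numbers are made up purely for illustration.
ROWS = [
    ("new",       "organic", 4200, 105),
    ("new",       "paid",    1800,  27),
    ("returning", "organic", 2600, 117),
    ("returning", "email",    900,  54),
]

def baseline_by_segment(rows):
    """Aggregate sessions and conversions per visitor type and
    return the baseline conversion rate for each segment."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [sessions, conversions]
    for visitor_type, _source, sessions, conversions in rows:
        totals[visitor_type][0] += sessions
        totals[visitor_type][1] += conversions
    return {seg: conv / sess for seg, (sess, conv) in totals.items()}

for segment, rate in baseline_by_segment(ROWS).items():
    print(f"{segment}: {rate:.1%}")
```

Even in this toy data, the segments behave very differently, which is exactly the kind of detail that a single blended traffic number would hide.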
Other sources of baseline data include things like customer surveys and user testing. If you have specific feedback or observations to focus on improving, these can also be a good place to start. We talked about some of these sources of data when we wrote about what to do when you are too small to A/B test.
#3. Know how to measure results
Plan ahead to know how you will measure your test results and where you need to get the data from. One thing that gets companies tangled up is that their data often has to come from multiple sources, rather than simply Google Analytics.
Sometimes you might have a lot of your systems integrated so that data is pulled together, but this is often not the case. This means you need to plan how and where you need to pull data from.
A/B testing will only give you the following results:
- The control wins
- Neither wins – there is no difference
- The variation wins
- The test was invalid – you messed something up with your methodology.
In any case, you need to know exactly what those results will look like and the metrics that indicate the result.
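To make the first three outcomes above concrete, here is a minimal Python sketch of how they might be distinguished using a standard two-proportion z-test at the 5% significance level. The function name and the sample numbers are our own illustration, not anything from a specific testing tool:

```python
import math

def classify_result(control_conversions, control_visitors,
                    variant_conversions, variant_visitors):
    """Two-proportion z-test: returns which of the three valid
    outcomes the data supports at the 5% significance level."""
    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    pooled = (control_conversions + variant_conversions) / \
             (control_visitors + variant_visitors)
    se = math.sqrt(pooled * (1 - pooled) *
                   (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    if abs(z) < 1.96:  # two-sided critical value for a 5% level
        return "no significant difference"
    return "variation wins" if z > 0 else "control wins"

# Hypothetical example: 3.0% vs 3.6% conversion on 10,000 visitors each
print(classify_result(300, 10000, 360, 10000))
```

The fourth outcome, an invalid test, is the one a formula can’t catch for you; that comes down to methodology and setup.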
#4. Limit the elements you are testing
If there’s one testing mistake that can really send you off-track, it’s testing too many elements at once. When you make a lot of changes, how do you know exactly what affected your results?
We saw this play out in a case study hailing the results of changing a button color to bright orange (here’s why button color isn’t the best A/B test). The reality is that if the control and the variant differ in too many ways, you can’t say which variables mattered.
Unless you want to run some kind of radical test on a severely underperforming website, limit the elements you change so that you know what made the difference. A very poorly performing website is a different story – sometimes you need to make radical changes to see rapid results.
#5. Create a good variation
This follows on from that last point – the classic A/B test measures version A against version B to determine a winner. You need a control version and a variation.
The key here is to know what you’re trying to accomplish. For a company that has been testing for a long time and has honed and optimized for a while, it might make sense to start looking at smaller elements. If you haven’t been testing, and especially if you’ve been underperforming, it makes sense to start with things that might represent a greater uplift in the metrics you want to improve first.
So, going back to the point in the last section about running a more “radical” test on an underperforming website: you might start by creating a whole new site layout. Your control is the old version, and you’re looking to see whether the new version makes a marked improvement. This might seem to contradict the tip about limiting the elements you change, but it makes sense when you’re trying to turn around a failing site decisively. If the new version wins, you then look at fine-tuning elements of the page. The idea is to test against a markedly different approach early on, then home in on the details.
#6. Calculate the test duration
To get accurate results from A/B testing, it’s crucial to run the test for a suitable period of time. A key mistake companies often make is pulling the test early, especially when an early result points to a winner. The problem is that those early results often even out as more data comes in, and you may find you have no winner at all.
Calculating minimum test duration does take some working out. It’s not something you just pull out of the air. You need to know the following data points:
- Your current conversion rate
- Your desired uplift
- The number of variations you are testing
- Your average daily visitors
Generally speaking, the smaller the uplift you’re trying to detect, the more variations you run, and the lower your average daily visitors, the longer the minimum testing period. Unbounce created a handy, automated calculator for test duration, which you can find here.
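As a rough back-of-the-envelope sketch (not Unbounce’s exact calculator), the duration calculation can be done in Python using a common rule-of-thumb sample-size formula for a 5% significance level and 80% power. All of the inputs below are hypothetical:

```python
def ab_test_duration_days(conversion_rate, relative_uplift,
                          variations, daily_visitors):
    """Rough minimum test duration in days, using the common
    16 * p(1-p) / delta^2 sample-size rule of thumb
    (approximately 5% significance, 80% power)."""
    p = conversion_rate
    delta = p * relative_uplift  # absolute uplift we want to detect
    n_per_variation = 16 * p * (1 - p) / delta ** 2
    total_visitors = n_per_variation * (variations + 1)  # +1 for the control
    return total_visitors / daily_visitors

# Hypothetical example: 3% conversion rate, aiming to detect a 20%
# relative uplift, one variation vs. the control, 500 visitors/day
print(round(ab_test_duration_days(0.03, 0.20, 1, 500)), "days")
```

Notice how the formula reflects the rule above: halving the uplift you want to detect quadruples the required sample, and fewer daily visitors stretches the same sample over more days.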
#7. Analyze, iterate, and re-test
Effective A/B testing isn’t a one-off effort; it means implementing a program of regular testing. When you get test results, you analyze and re-test. Sometimes you’ll find that a small change completely alters results, or perhaps there was a factor you didn’t account for that makes a test appear to be a loser.
The key takeaway is that optimization is a learning process. You can’t expect to get the best from testing if you’re not prepared to keep iterating or re-testing. There are many things you can learn, even from losing tests.
#8. Get testing help
What goes into successful A/B testing? It’s one of those things that sounds easy but is much more complex in reality. For example, you need people who are proficient with testing methodology and setup, graphic design, UX and UI design, and more. It takes a lot of time and effort to set tests up properly.
If you don’t have the bandwidth to commit to testing within your own team, then getting help is a better idea than muddling through. A service such as Team Croco can take care of the testing elements and reporting for you, leaving your team to get on with their primary roles.
A/B testing isn’t something you quickly throw together, not if you want to get results that are of value to your company.
Over time, we’ve learned some “golden rules,” as outlined here in this post. Being very clear about your purpose and what you’re actually testing comes out at the top of the list. This ensures consistency with your approach and helps to narrow your scope.
All that said, you could implement testing yourself following these guidelines, but it is a lot of work to do properly. If your company could use A/B testing help, request a demo with us here.