Are you too small to A/B test effectively?
Most ecommerce companies now realize the importance of A/B testing for conversion rate optimization; in fact, some can’t wait to get started. We’d all love to have actionable insights that grow our bottom line, right?
The caution here is that it’s not always appropriate to be A/B testing. If your business is currently too small, you can end up with skewed results, or results that aren’t statistically significant enough to draw reasonable conclusions from.
The last thing you want to do is head down a path directed by misleading results; that’s a quick way to waste both time and money. So, what should your company do if you’re currently too small to A/B test?
What does “too small” mean?
To begin with, let’s define what “too small” means in terms of A/B testing for ecommerce businesses. As a guideline, we always advise customers that you need to have a reasonable number of sales or conversions per month to perform fair tests.
What number should you have at a minimum? We’d go with 1,000 conversions per month.
In our experience, while you could work with numbers a bit lower than this, you start to head into murky territory where your results may be inaccurate, or where you need to take bold measures with extreme tests. With smaller numbers, you don’t get to detect the small but statistically significant uplifts that you can with 1,000 or more conversions.
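To illustrate why smaller samples force bigger bets, here is a sketch of a standard two-proportion sample-size calculation. The baseline conversion rate and uplift figures below are made-up numbers for illustration, not figures from this article, and the formula is the usual normal-approximation estimate rather than anything specific to our methodology:

```python
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline, uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative
    uplift over a baseline conversion rate (two-proportion z-test)."""
    p1 = baseline
    p2 = baseline * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical 2% baseline conversion rate:
for uplift in (0.10, 0.25, 0.50):
    n = visitors_per_variant(0.02, uplift)
    print(f"{uplift:.0%} relative uplift -> {n:,} visitors per variant")
```

With these assumed numbers, detecting a 10% relative uplift takes on the order of 80,000 visitors per variant, while a 50% uplift needs only a few thousand. That is the trade-off in practice: small traffic means you can only reliably detect dramatic changes.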
Basically, with a smaller sample size, you need to see bigger impacts to prove the validity of the test. For example, let’s say you test across 500 conversions. You perform a split test that results in 230 conversions for one version and 270 for another. Is the 270 result a clear winner? Probably not – the difference is so small and the sample so limited that on another day you might get the opposite result.
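You can check that intuition with a quick calculation. The sketch below uses a normal approximation to a two-sided binomial test on the hypothetical 230 vs. 270 split above: if the two versions truly performed equally, the 500 conversions would split roughly 50/50, so we ask how surprising a 270 count would be under that assumption:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical example from the text: 500 total conversions,
# split 230 (version A) vs. 270 (version B).
total = 500
version_b = 270

# Under the null hypothesis the conversions split 50/50, so version B's
# count is Binomial(500, 0.5): mean 250, standard deviation ~11.2.
expected = total * 0.5
std_dev = sqrt(total * 0.5 * 0.5)

z = (version_b - expected) / std_dev       # roughly 1.8
p_value = 2 * (1 - NormalDist().cdf(z))    # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
print("Significant at the 5% level?", p_value < 0.05)
```

The p-value comes out above 0.05, so by the usual convention the 270-vs-230 split is not statistically significant – exactly the “no clear winner” situation described above.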
There’s also a point to be made that with sample sizes that are too small, A/B testing can be an outright failure. Having failing tests is okay; in fact, it’s to be expected. We find that just three out of 10 tests are winners that produce meaningful results, even with large numbers of conversions. However, with larger sample sizes, even the tests that fail still provide learning experiences.
When your sample size is too small, even the tests that appear to win might not be a true reflection of your market, or you may end up with no decisive winners at all. In any case, testing when you are too small, only to have tests fail, can be an expensive exercise for your company when you could be doing other things to drive revenue.
What can you do instead?
The first (and probably most obvious) answer is that you need to drive more conversions in order to reach that 1,000-per-month minimum. For many companies, this might seem a frustrating answer – the reason you want to A/B test is to drive more conversions, right?
One way of looking at it: you could spend money on A/B testing only to find that you don’t get viable results, or you could spend that same money on marketing activities that drive more traffic to your website. Which seems the wiser choice?
Some companies try buying traffic in order to A/B test, but this is a risky approach. Much of the time you’re not getting qualified traffic – the people who would otherwise fit the description of “your ideal customer.” There is no point in running tests on people who would never be a customer of your company; your results will not be an accurate representation of the people you’re really trying to attract.
While your conversion numbers remain small, A/B testing itself will rarely provide you with significant, quantifiable results – so look to learn in other ways. You have other means available that can give you qualitative data and, while less scientifically rigorous, help you form reasonable hypotheses about driving more conversions. You can use these methods to optimize for traffic until you reach the point where A/B testing becomes viable.
Let’s look at a few of those options:
Heuristic analysis

In user experience (UX), heuristic analysis is a way of walking through the various critical tasks that a user has to complete on your website and evaluating them for usability. As an example, on an ecommerce website the checkout process might be a target for heuristic analysis, where you would check for any roadblocks to conversion.
The results of heuristic analysis are not guaranteed to be optimal as it involves experience-based techniques for discovery. Someone who is very experienced with websites and optimization will usually be able to spot possible issues very quickly, but their analysis is still based on assumption. A quick way to get some points to work on is simply to get your team together, preferably with some customers, and have them go through your website.
During an analysis, you will look for things like:
- Friction points. These can be anything that forces the customer to do something or creates a roadblock of some kind. For example, forcing a customer to have an account in order to complete their purchase can be considered a friction point, as can having long, tedious forms to fill out.
- Clarity. Is it obvious where the customer should go? Are product descriptions or any calls to action clear? Confusing menus or navigation can be sources of friction too.
- Expectations. Looking at the traffic sources for your key pages, does each page deliver on the expectations of its search traffic? What are visitors searching for when they land there?
- Distractions. Is there anything on the page that may distract people from taking the desired action?
Heatmaps

A heatmap helps you build a picture of how people are using your website and what seems most important to them. You can view clicks, taps, and scrolling behavior, and get some clues as to the motivations of your website visitors.
Here are some of the sorts of information you can glean from a heatmap:
- The side or area of a webpage that gets the most attention
- The amount of time people spend browsing above vs. below the fold
- The direction in which people read your content
- Whether you are impacted by “banner blindness”
- The images that draw the most attention
There are tools available such as Hotjar, which provides a range of features for heat mapping and surveying. You can find out where visitors are dropping off in your funnels and look for your biggest opportunities for improvement.
User feedback and surveys

User feedback is a great source of raw data, particularly if you have a good sample of users who fit within that “ideal customer” mold. Surveys can be a quick and relatively low-cost way of getting some usable feedback.
There are a number of different survey tools available to help you do this, but one we like in particular is Usabilla. This tool is specifically geared for website feedback, both on desktop and using native in-app methods for your mobile app.
Look for a tool like this that allows you to actively ask for feedback, as well as giving the option to users to offer feedback at-will. While this is not as good as a truly scientific A/B test, you can at least look for patterns in the responses you get or simply try things out based on feedback.
Analytics

Google Analytics (or your analytics software of choice) provides some great insights that you can at least make some assumptions from. There are a huge number of data points you could look into, but even at a basic level you can quickly learn:
- Where visitors are coming from and when they are most likely to be active
- Which are your most popular webpages
- Which pages have a high bounce rate
- The keywords that visitors use to find your website
- How your visitors are browsing – the devices and browsers that they use
From these points alone you can start to make some assumptions about optimization. For example, do you have help available during the most active browsing hours? What is it about some pages that makes people leave quickly? Is your website optimized for the devices and browsers that people prefer to use?
While we’d love to be able to tell everyone that they should be A/B testing right now, the fact is that’s not true for all businesses. If you don’t have the conversion numbers, then you simply don’t have a reliable means of getting good test data.
Rather than spending time and money on A/B testing too early, we suggest you work on building up your web traffic to drive more sales and making small adjustments based on other data that is more easily obtained.
Your customer is always at the heart of conversion rate optimization, so start with what you can learn from customer behavior and feedback, and keep building to reach 1,000 conversions or more per month. We’d love to help you out with some A/B testing once you get there!