Mobile vs. Desktop: Why You Should Run Separate Tests

Is it a good idea to run A/B tests on all of your traffic at once?

At a glance, it might seem that doing so would simplify things. You would have a bigger sample size and might be able to cut back on testing time. There are indeed cases where running tests concurrently on all platforms makes complete sense, but this is not true in every situation.

The key to testing lies in the results you get: will they be a true reflection of the audience you test? Will any feedback you get be applicable to at least the majority of the test traffic?

For the sake of getting viable results, you should run tests separately on your mobile traffic vs. your desktop traffic, at least most of the time. Here’s why:

Get some quick ideas for mobile or desktop tests here

Device-specific demographics, dispositions and business goals

Look closely at mobile and desktop audiences and you’ll find different preferences, even completely different user mindsets. These may shift according to each device’s interface, the situation, or the location in which it is being used.

[Chart: device usage by time of day, via Smart Insights]

Time of day also affects which device people are disposed to use, as this data from Smart Insights shows.

The premise of testing is to achieve some sort of business goal, right? Well, with such varying user modes and preferences, it is highly likely that your goals for mobile and desktop should differ.

Let’s take a closer look:

Mobile

Demographics

By nature, a mobile user is on the go. They might be browsing while waiting for an appointment or during their commute. Their mode of operation is to do things quickly, often with a short attention span. When they look for answers, they want them right now, without having to hunt around or take a lot of extra steps.

User demographics, such as age, come into play as well. Let’s say your ecommerce store has relatively broad appeal. You’ll still have variation in terms of age group preference for mobile, desktop, or multi-device usage, as the graph below shows:

[Chart: device preference by age group]

Functionality

Consider the size and operation of the mobile device. The user is faced with a touchscreen that is swipe- and tap-centric. They scroll vertically, but there is limited visual real estate for information delivery. They have to tap out data entry on a tiny interface, which is more difficult than using a keyboard on desktop.

Think back to the mindset of the user – their attention span may be limited, which means while they could keep scrolling to find information, they probably don’t want to. You will have to carefully prioritize how information appears. Features that appear above the fold on a desktop may require a lot of scrolling on a mobile device.

Mobile sites that are simply a shrunken version of the desktop site often deliver a poor experience to the user. This is absolutely a factor that could skew your test results. On the other hand, if you have an optimized mobile site that is different from your desktop site, then you’re not testing exactly the same variables, are you?

Behavior and corresponding conversion goals

The mobile user can only easily access the information that is immediately in front of them. Hopping between websites with multiple tabs open on a desktop is not as simple to do on a mobile device. How does this affect what the mobile user is willing to do on the small screen?

Mobile is good for quick research, but not necessarily for carrying a purchase through or entering credit card information. A user might add a couple of things to the shopping cart, but leave with the intention of revisiting the products later.

Research bears this out, showing that:

  • Mobile users usually hunt for something specific, then leave more time-consuming searches for desktop later on.
  • People still tend to switch to desktop to actually complete a purchase.

How might this then impact conversion goals? Here are some things that we suggest may be more appropriate to the mobile experience:

  • Signing up for an account
  • Browsing and adding products to the cart
  • Adding products to a wish list
  • Capturing an email address so that you can follow up later
  • Registering or logging in with a social media account

Desktop

Demographics

A desktop user is in a fixed physical position. In that case, they may be much more likely to take their time browsing and looking for information.

If you glance back at the graph showing share of demographic audiences by platform usage, you’ll see that there is a significantly higher proportion of users in the 55+ age group who are desktop-only. Depending on your own customer demographics, this may play a role in your results.

Functionality

You have much more visual real estate available on desktop. You can display more information without the user having to scroll below the fold; therefore, while prioritizing your display is still important, it is not quite as critical as on a mobile device.

The desktop user finds it easier to enter and process information with a full keyboard at their disposal. They can easily have multiple tabs open and quickly move between them.

Behavior and corresponding conversion goals

The user potentially has more time on their hands along with a more dynamic interface. They might have more tabs open, clicking between them to make comparisons.

With the relative ease of use that a full keyboard brings, they may be more willing to go through extra steps, or fill out slightly longer forms. (Of course, form length and fields are things you can A/B test!)

Data shows us that add-to-cart rates are still higher on desktop than on mobile, so prioritizing tests that lead to a final sale becomes more important:

[Chart: add-to-cart rates, desktop vs. mobile]

Therefore, suitable desktop conversion goals might include things like:

  • Completing the transaction
  • Making the check-out process easier
  • Making cart functionality simpler (for example, adding or removing products, or going from the cart to “continue shopping”)
  • Getting an email opt-in after check-out is complete

How this affects testing and optimization

We’ve demonstrated how and why your sales flow and conversion milestones may differ based upon device-specific user behavior. Therefore, your website testing should reflect these differences.

Here are some examples:

  • The UI of mobile favors smaller sign-up forms and less data entry. However, if you require more than one field of information, you may want to test a multi-step vs. a single form. (You wouldn’t necessarily test multi-step on desktop.)
  • If you find that your audience is happy to add to the cart on mobile but tends to follow up later on desktop to complete the purchase, you could focus tests on whether it makes sense to capture their email address earlier in the sales flow, so you can follow up on any incomplete purchases.
  • On desktop, if your visitor stats are similar to the norm, you already know that more buyers will come through via desktop. Your primary goal might be to get the sale first, then capture the email address afterwards. In fact, there is data on checkout optimization indicating that you should never “force” people to create an account and that you should keep the flow as short as possible. Email capture might be seen as an extra step in this case (but obviously, this could be something to test!). A sketch of this kind of device-specific test assignment follows below.
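
To make that concrete, here is a minimal sketch of device-specific test assignment (Python, with hypothetical experiment names and hash-based bucketing; this is not the API of any particular testing tool):

```python
import hashlib

# Hypothetical experiments: each device class gets its own test,
# with its own variants and its own conversion goal.
EXPERIMENTS = {
    "mobile": {"name": "signup_form_steps_mobile",
               "variants": ["single_form", "multi_step"]},
    "desktop": {"name": "checkout_email_capture_desktop",
                "variants": ["before_checkout", "after_checkout"]},
}

def assign_variant(user_id: str, device: str) -> tuple[str, str]:
    """Deterministically bucket a user into the experiment for their device.

    Hashing user_id together with the experiment name keeps assignment
    stable across visits while keeping the mobile and desktop samples
    completely separate.
    """
    exp = EXPERIMENTS[device]
    digest = hashlib.sha256(f"{exp['name']}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(exp["variants"])
    return exp["name"], exp["variants"][bucket]

print(assign_variant("user-42", "mobile"))
print(assign_variant("user-42", "desktop"))
```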

It’s important to note that your optimization goals should be based on the knowledge you have about your own particular traffic. While completing purchases on a desktop is still the overall norm, every individual business is different. Get to know your own analytics to determine where traffic is coming from and who does what.

Mobile conversion is growing, as seen in the data in the previous section. If your business is mobile-focused for conversions (think platforms like Uber), then your testing should account for this. For example, perhaps you take a tip from Amazon and enable some kind of “one-click” checkout. Another example could be testing the addition of different payment methods that allow for that one click.

Uneven traffic volumes

Where does your traffic come from? Some ecommerce companies get 80% or more of their traffic from mobile sources, and this will immediately skew test results unless you run separate tests for each traffic type.
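
A small worked example shows how an 80/20 mobile-heavy mix can bury a real effect. All of the numbers below are invented for illustration: a variant that lifts desktop conversion by 20% but slightly hurts mobile reads as an overall loser in the pooled data.

```python
# Invented numbers: 80% of traffic is mobile, per variant.
visits = {"mobile": 8000, "desktop": 2000}
control = {"mobile": 0.020, "desktop": 0.050}  # conversion rates
variant = {"mobile": 0.017, "desktop": 0.060}  # hurts mobile, lifts desktop 20%

def pooled_rate(rates: dict) -> float:
    conversions = sum(visits[d] * rates[d] for d in visits)
    return conversions / sum(visits.values())

print(f"pooled control: {pooled_rate(control):.2%}")  # 2.60%
print(f"pooled variant: {pooled_rate(variant):.2%}")  # 2.56%, looks like a loser
for d in visits:
    # Segmented view: the 20% desktop lift is invisible in the pooled numbers.
    print(f"{d}: {control[d]:.1%} -> {variant[d]:.1%}")
```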

Sometimes, a test that works beautifully for desktop traffic flops for mobile, and vice versa. For example, you might be testing a piece of copy. A lengthier, more detailed piece might work well on desktop, taking advantage of the greater real estate available. The same copy on a mobile device may perform poorly because the length is not suited to the format.

You can have more confidence in your test data if your test samples are essentially similar to one another. This means having consistent context for the tests.

There’s a possibility of “data pollution”

Data pollution occurs when outside factors compromise the accuracy of your data. If you’re testing mobile and desktop traffic together, their conditions aren’t the same, and there’s a possibility that your data gets polluted.

For example, what if a feature that works perfectly on desktop is buggy on iOS? If you have a high proportion of iOS users, they won’t use the feature, and your findings will suggest that the feature is a poor choice. Maybe the feature simply needs fixing for that operating system…

“Mobile” as a category may be too broad

Not all mobile is the same. For example, your traffic might represent:

  • iOS vs. Android
  • Mobile phone vs. tablet
  • Other mobile platforms, such as Windows or Amazon Kindle devices

Mobile phone versions of websites are usually different from tablet versions, with the slightly larger tablet screen allowing for more visible detail.

There is also data showing that each group of users has different spending habits on mobile, so you may actually want to break your tests down further to cover the different mobile modes. For example, did you know that iPhone users buy more on mobile than Android users?
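
If you do segment this far, the analysis can start with coarse User-Agent bucketing. Below is a rough Python sketch; the substring checks are illustrative only, and a production setup would rely on a maintained UA parser or client hints instead:

```python
def mobile_segment(user_agent: str) -> str:
    """Rough User-Agent bucketing for analysis only. These substring checks
    are illustrative; real projects should use a maintained UA parser."""
    ua = user_agent.lower()
    if "kindle" in ua or "silk" in ua:   # Fire devices report Android too,
        return "kindle"                  # so check them first
    if "ipad" in ua:
        return "tablet"
    if "iphone" in ua:
        return "ios_phone"
    if "android" in ua and "mobile" not in ua:
        return "tablet"   # Android tablets usually omit the "Mobile" token
    if "android" in ua:
        return "android_phone"
    return "other"

print(mobile_segment(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15"
))  # -> ios_phone
```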

Download a few quick ideas for mobile vs. desktop tests here

Conclusion

How should you move forward with testing?

If you have the traffic capacity, then you can run tests specifically for the platform your users originate from. In any case, it is always good practice to check for anomalies between desktop and mobile in every test that you run. As you can see, there are legitimate reasons for these differences, which can provide significant data for optimization.
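
“Traffic capacity” can be sanity-checked with the standard sample-size formula for comparing two proportions. Here is a minimal Python sketch; the baseline conversion rates below are invented for illustration:

```python
from math import ceil, sqrt

def sample_size_per_variant(p_base: float, rel_lift: float) -> int:
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate (two-sided z-test for two proportions)."""
    z_alpha, z_beta = 1.96, 0.84  # alpha = 0.05 (two-sided), power = 0.80
    p1, p2 = p_base, p_base * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a 20% relative lift on a 1.5% mobile baseline vs. a 4% desktop baseline:
print(sample_size_per_variant(0.015, 0.20))  # ~28,000 visitors per variant
print(sample_size_per_variant(0.040, 0.20))  # ~10,300 visitors per variant
```

The lower a segment’s baseline conversion rate, the more traffic it needs, which is why a low-traffic segment sometimes cannot support a separate test.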

It’s also worth noting that there are some basic tests you can run on all platforms: for example, adding or removing certain elements, or changing content such as text, photos, or calls to action.

Know your own traffic, and be clear about the goals you have for optimization on each platform. This will help you to run appropriate tests.

