Don’t miss your biggest wins – why A/B test results need device-specific analysis

We recently ran a really interesting test for a VPN site, and it showed exactly why you have to be careful about taking overall test results at face value. The test was simple – adding an alert bar across the top of the site, like the one below:

This kind of bar is typical on VPN sites, so we expected it to be a slam dunk across the whole site. It wasn’t!

We put it on desktop and we put it on mobile, with a 50:50 traffic split.

What was amazing was that on desktop this DROPPED the conversion rate by about 20%, while on mobile it INCREASED the conversion rate by 33%.

Changes having completely opposite effects on mobile and desktop is something we’ve seen before on another VPN site. And it’s not only SaaS sites – for one of our eCommerce clients, David Austin Roses, many tests have given significant improvements on desktop but not done so well on mobile.


The danger

By breaking the results down into desktop and mobile, we KNEW that adding the bar on mobile would increase the mobile conversion rate by 33%. That’s huge.

We also KNOW not to add this bar to the desktop site.  

Traditionally, people look at the overall results of a test across all devices and only break them down by desktop and mobile when they suspect a usability difference.

So if one goes up and the other goes down, the blended result looks flat, and they don’t make the change.

In this case they would not have implemented the change on any device, and would have missed out on a 33% increase on mobile.
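To make the maths concrete, here’s a quick sketch of how opposite device-level effects can blend into a flat overall number. The visitor counts and conversion rates below are purely illustrative, chosen to mirror the roughly -20% desktop / +33% mobile pattern from the test above:

```python
# Illustrative numbers only: a 50:50 desktop/mobile traffic mix where the
# variation hurts desktop (-20%) and helps mobile (+33%), as in the test above.
visitors = {"desktop": 10_000, "mobile": 10_000}
control_cr = {"desktop": 0.050, "mobile": 0.030}  # hypothetical baseline rates
variant_cr = {"desktop": 0.040, "mobile": 0.040}  # -20% desktop, +33% mobile

def blended_rate(rates):
    # Overall conversion rate across all devices combined
    total_conversions = sum(visitors[d] * rates[d] for d in visitors)
    return total_conversions / sum(visitors.values())

control = blended_rate(control_cr)
variant = blended_rate(variant_cr)
print(f"Blended control CR: {control:.2%}")            # 4.00%
print(f"Blended variant CR: {variant:.2%}")            # 4.00%
print(f"Overall uplift:     {variant / control - 1:+.1%}")  # +0.0%
```

Looked at only in aggregate, this test reads as a 0% change, even though both device segments moved substantially.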


Think back to losing or flat tests you’ve run on your site. Did you look at the device breakdown? Are you sure you wouldn’t increase conversion by implementing them on just one type of device?


But why are the results sometimes different on mobile compared to desktop?


1. People are in very different buying positions on different devices.

Some products and services are too complex or expensive for people to complete their purchase on mobile. For example, for one of our clients, who offers a B2B service, users might do some initial research on mobile, but most of the purchasing is done on desktop. Similarly, for high-ticket B2C items, users are often reluctant to actually complete the purchase on mobile. For other products, like VPNs, there is a more immediate need and a lower cost barrier, so many purchases are made on mobile.

2. Different devices are used by different audiences.  

Younger audiences are more likely to be on mobile than older audiences, so rather than thinking about desktop and mobile as different ways of viewing your site, think about them as a way to differentiate between your audiences.

3. Time, screen space and distractions differ between devices. 

Typically, desktop users have more time to appreciate what’s on the site. They can read, they can research. On mobile you need to be really careful not to distract your users: they won’t read for as long, and they have a very limited screen size. You really need to identify what the key messages are (a heatmapping tool like CrazyEgg can help with this) and make sure these are the ones users see and read. Mobile users are also more likely to be dual-screening and have shorter attention spans, and the page needs to account for this.

Going forward

When you’re designing your A/B tests, make sure you carry out your research across mobile and desktop, and break down your analytics by device so that you understand how each is used differently for your products.

When you run your A/B tests, make sure you analyse the results by device for every test. This way you’ll get all of the upside… and none of the downside.
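As a minimal sketch of what that per-device analysis might look like, here’s an example in Python using pandas and scipy. It assumes your testing tool can export one row per visitor with a device, a variant and a converted flag – the column and variant names here are hypothetical, so adjust them to match your own export:

```python
# A minimal sketch of per-device A/B test analysis.
# Assumes a DataFrame with one (hypothetical) row per visitor:
#   'device'    -> "desktop" or "mobile"
#   'variant'   -> "control" or "treatment"
#   'converted' -> 1 if the visitor converted, else 0
import pandas as pd
from scipy.stats import chi2_contingency

def analyse_by_device(df: pd.DataFrame) -> None:
    for device, segment in df.groupby("device"):
        # Build a 2x2 contingency table of variant vs. converted
        table = pd.crosstab(segment["variant"], segment["converted"])
        _, p_value, _, _ = chi2_contingency(table)

        # Conversion rate per variant within this device segment
        rates = segment.groupby("variant")["converted"].mean()
        uplift = rates["treatment"] / rates["control"] - 1
        print(f"{device}: uplift {uplift:+.1%}, p-value {p_value:.3f}")
```

One caveat: slicing by device doubles the number of comparisons you’re making, so decide on the segmentation before the test starts and make sure each segment gets enough traffic to reach significance on its own, rather than hunting through breakdowns after the fact.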

To find out what we could do for you:

Get in touch