Three Whiskey

We're Three Whiskey - the performance marketing agency that blends digital expertise, behavioural insight and brand understanding.

London, United Kingdom
Founded: 2015
Staff: 35

Skills

SEO
Paid Media
Paid Search
Paid Social
Digital Strategy
Research & Analytics
Content Creation
Social & Content Strategy
Social Media Content

Clients

Pfizer
Nestle
Lintbells
Pion
Countrywide Plc
Boehringer Ingelheim
Pronovias
PUBG

Sector Experience

Entertainment
Property
Pharmaceutical
Healthcare
FMCG
Charity
Fashion
Luxury


Mistakes to avoid when A/B testing

by Rose Tero

15 September 2020, 14:51

A/B testing is an experimental method used to optimise website conversion rates: a version of a webpage with one variable changed, such as the copy, the size of a button or the timing of a triggered offer, is split-tested against the original webpage (the control).

The purpose is to determine whether the variant performs better than the control webpage by delivering a better conversion rate.

‘Conversions’ can be several things, depending on the target behaviour the marketer is interested in generating. It might be social shares, newsletter signups, E-Commerce sales... anything that will improve your website’s commercial performance.

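As a minimal illustration of what that comparison looks like in practice, the sketch below works out the conversion rate for a control and a variant and runs a simple two-proportion significance test. All visitor and conversion counts are invented for illustration; testing platforms typically perform an equivalent calculation for you.

```python
# Minimal sketch: compare a variant's conversion rate against the control's.
# The visitor and conversion counts are invented for illustration.
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: control (A) vs variant (B)
control_visitors, control_conversions = 10_000, 200   # 2.00% conversion rate
variant_visitors, variant_conversions = 10_000, 246   # 2.46% conversion rate

uplift = (variant_conversions / variant_visitors) / (control_conversions / control_visitors) - 1
p_value = two_proportion_p_value(control_conversions, control_visitors,
                                 variant_conversions, variant_visitors)

print(f"Relative uplift: {uplift:.1%}")   # ~23% higher conversion rate
print(f"p-value: {p_value:.3f}")          # below 0.05 suggests a genuine difference
```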

A/B testing is a relatively easy process to understand, but its apparent simplicity can be misleading when it comes to running an experiment. A marketer might plan and run a test without realising that the results are not quite what they seem, and end up making a change to a website that actually harms conversion rate.

Luckily, there are a number of things you can do (or avoid doing!) to make sure your A/B testing yields the best and most insightful results.

1. Use data to inform your hypothesis

A major pitfall in A/B testing is guessing at which changes will improve your conversion rate. There are established UX best practices to follow, and broken UX should obviously be fixed, but specific UX optimisations should be informed by website analytics data and/or user testing.

Analytics data can be used to identify problem areas with a website and generate split testing ideas such as:

Website conversion funnel or form abandonment (e.g. checkout steps)
Landing pages with a high bounce rate
Channel-specific traffic drop-off (e.g. poor Paid Social performance)
Device-specific drop-off (e.g. poor mobile experience)
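As a rough illustration of that triage, the sketch below flags landing pages worth split-testing from a small, invented analytics extract using pandas; the page names, traffic figures and thresholds are all assumptions rather than benchmarks.

```python
# Hypothetical sketch: pick out landing pages worth split-testing based on
# bounce rate and traffic volume. Data and thresholds are invented; substitute
# an export from your own analytics platform.
import pandas as pd

landing_pages = pd.DataFrame({
    "page":        ["/home", "/pricing", "/blog/guide", "/checkout"],
    "sessions":    [12_000, 3_400, 8_900, 2_100],
    "bounce_rate": [0.38, 0.71, 0.55, 0.64],   # share of single-page sessions
})

# Only consider pages with enough traffic to test meaningfully,
# then rank the worst bounce rates first.
candidates = (
    landing_pages[(landing_pages["sessions"] >= 2_000) &
                  (landing_pages["bounce_rate"] >= 0.60)]
    .sort_values("bounce_rate", ascending=False)
)

print(candidates)   # /pricing and /checkout stand out as test candidates
```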

If analytics data doesn’t uncover potential issues with the website, then a marketer can turn to user testing. User testing is a type of primary research in which individuals are given a list of actions to perform on a website so that you can observe how easily they complete them and identify friction points. Examples of these actions are:

Search for product A on the website and add it to your basket
Purchase the cheapest service offered by our brand
Find information about our brand that you would want to know before purchasing

2. Don't expect small changes to have a big impact

The impact of a small change is often overestimated; it’s rare that changing the colour of a “submit” button or switching a couple of words in the website copy will significantly shift your conversion rate in an A/B test.

The exception to this rule is high traffic, high transaction websites, where tiny shifts in conversion rate can amount to significant returns. For most websites with average levels of traffic, marketers have to accept that Conversion Rate Optimisation via A/B testing is an ongoing process of continuous improvement and not a silver bullet.
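To see why tiny shifts matter at scale, here is a back-of-the-envelope sketch; every figure in it is hypothetical.

```python
# Back-of-the-envelope sketch of the value of a small conversion-rate shift on
# a high-traffic site. All figures are hypothetical.
monthly_sessions    = 5_000_000
baseline_cvr        = 0.020    # 2.0% conversion rate
uplift_pp           = 0.001    # a 0.1 percentage-point improvement
average_order_value = 40.0     # in GBP

relative_uplift   = uplift_pp / baseline_cvr         # a 5% relative lift
extra_conversions = monthly_sessions * uplift_pp     # 5,000 extra orders
extra_revenue     = extra_conversions * average_order_value

print(f"Relative uplift:   {relative_uplift:.0%}")
print(f"Extra conversions: {extra_conversions:,.0f} per month")
print(f"Extra revenue:     £{extra_revenue:,.0f} per month")   # £200,000
```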


3. Verify your split-testing platform's integrity

There are plenty of A/B testing platforms available, including Google Optimize, Visual Website Optimizer and Unbounce. These platforms operate as black boxes: marketers configure their experiment and feed in traffic, and a result is delivered at the other end.

But how can we trust that the platform has implemented our experiment exactly as planned when we can’t see the inner workings? AA/BB testing is a common technique used to answer this question by testing the platform itself, and it can be applied in a couple of ways.

For a low-traffic website, an AA test can be run before the A/B testing experiment. This is achieved simply by testing the existing web page (control) against itself. The platform should not find a winner because the control (A) and the variant (A) are the same. If the platform finds a winner, then that indicates a platform or experiment configuration problem.
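The logic of an AA test can be sanity-checked with a quick simulation. The sketch below, using an arbitrary conversion rate and sample size, repeatedly splits identical traffic into two groups and counts how often a standard significance test declares a "winner" anyway; at a 95% confidence threshold that should happen in only around 5% of runs, so a platform that finds winners far more often than that deserves scrutiny.

```python
# Sketch: simulate repeated AA tests where both groups share the same true
# conversion rate. The rate, sample size and number of runs are arbitrary.
import random
from statistics import NormalDist

def simulate_group(visitors: int, rate: float) -> int:
    """Conversions among `visitors` people who each convert with probability `rate`."""
    return sum(random.random() < rate for _ in range(visitors))

def is_significant(conv_a: int, n_a: int, conv_b: int, n_b: int, alpha: float = 0.05) -> bool:
    """Two-sided two-proportion z-test at the given significance level."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(z)) < alpha

random.seed(42)
runs, visitors_per_group, true_rate = 1_000, 2_000, 0.02

false_winners = sum(
    is_significant(simulate_group(visitors_per_group, true_rate), visitors_per_group,
                   simulate_group(visitors_per_group, true_rate), visitors_per_group)
    for _ in range(runs)
)

# Roughly 5% of runs should find a "winner" purely by chance.
print(f"Share of AA tests declaring a winner: {false_winners / runs:.1%}")
```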

Larger websites can build AA/BB tests into their A/B testing strategy. AA/BB testing involves running two controls (AA) and two variants (BB) side by side. Because website traffic is split into four groups rather than two, it requires a decent amount of traffic to reach statistical significance (a genuine result). The results within each pair should be consistent: if one control or one variant performs significantly better or worse than its twin, that might indicate a platform or experiment problem.
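For context, splitting traffic into consistent groups is typically done by hashing a stable visitor identifier so that each visitor always lands in the same arm. The sketch below is a generic illustration of that idea for a four-way AA/BB split, not the mechanism of any particular testing platform; the experiment name and visitor IDs are made up.

```python
# Hypothetical sketch: deterministic four-way traffic split for an AA/BB test.
# Hashing a stable visitor ID means the same visitor always sees the same arm.
import hashlib

ARMS = ["control_A1", "control_A2", "variant_B1", "variant_B2"]

def assign_arm(visitor_id: str, experiment: str = "aabb-homepage-test") -> str:
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

# A1 vs A2 (and B1 vs B2) results should be consistent once the test has run.
for visitor in ["user-1001", "user-1002", "user-1003", "user-1004"]:
    print(visitor, "->", assign_arm(visitor))
```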

4. Don't test too many changes at once

It can be tempting to test more than one change in an A/B test, and there are instances where this is the only approach available: for example, when a website redesign is happening for branding reasons, or when significant functionality needs to change on the website in one go.

Marketers might feel the need to make more, or more significant, changes to low-traffic websites in order to force a difference in user behaviour. This is a valid strategy, but it requires caution: the positive conversion uplift from, say, three changes to a webpage might be completely undone by a fourth, delivering an inconclusive result.


5. Don't be impatient and declare a winner too soon

Running an experiment can be exciting in that it promises to improve the performance of your site and lead to an increase in sales or other conversions, but don’t use that as an excuse to rush the process.

Your experiment needs to run for an appropriate amount of time, with enough users, to make sure you’ve reached statistical significance. Ending experiments too early means you might be making changes to your website that harm conversion rate, which defeats the entire point of A/B testing.
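One way to avoid the temptation is to estimate up front how many visitors, and therefore how many days, the test will need. The sketch below uses the standard normal-approximation sample-size formula for comparing two proportions; the baseline conversion rate, target uplift and daily traffic figures are assumptions chosen purely for illustration.

```python
# Sketch: estimate how many visitors per group (and how many days) an A/B test
# needs before it can reach statistical significance. All inputs are hypothetical.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p_control: float, p_variant: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size for a two-sided two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_control)
    return ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

baseline_rate  = 0.020    # current conversion rate
target_rate    = 0.024    # the smallest uplift worth detecting (a 20% relative lift)
daily_visitors = 1_500    # traffic entering the experiment each day

per_group = sample_size_per_group(baseline_rate, target_rate)    # ~21,000 visitors
days_needed = ceil(per_group * 2 / daily_visitors)               # roughly 4 weeks

print(f"Visitors needed per group: {per_group:,}")
print(f"Estimated test duration:   {days_needed} days")
```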

Be sure to factor testing time into your A/B plan at the very beginning, and you won’t feel rushed or tempted to cut the experiment short.

A/B testing is a decisive tool in your CRO arsenal, but you need to make sure you’re using it to its full potential. By starting with an informed hypothesis and rigorous testing controls, you’ll have the best chance of reaping the rewards of this valuable Conversion Rate Optimisation practice.

Tags

testing
website design
SEO
data
data analysis