
If the polls were wrong on Brexit, does it mean marketing research is wrong too?

By Will Hanmer-Lloyd, Behavioural planner

July 1, 2016

At 10pm on Thursday 23 June, YouGov released a recontact survey showing Remain winning the EU referendum by 52 per cent to 48 per cent. People had, it seemed, feared change as they entered the voting booth. Before the first results came in, Nigel Farage gave a semi-concession speech, and many Londoners and global observers breathed a sigh of relief.


Only, they turned out to be wrong.

Earlier that day, several polling companies had released final surveys showing Remain would win, including an online poll from Populus putting Remain ahead by 55 per cent to 45 per cent. In 2015, nearly every survey research company said the UK would have a hung parliament. They were wrong. The question we are now forced to ask is: are these the research companies and methodologies we want determining whether our marketing budgets have been spent successfully?

There are three main reasons why we might not. Firstly, the respondents who answer these surveys are not representative. Online polls rely on self-selecting panels, meaning they capture only people who actively want to give their opinion. We all know someone who is over-eager to share their views (we usually meet at least one over Christmas dinner), and such people are not representative of the rest of us. Phone samples likewise skew towards the sort of person who says “yes, I would love to spend 15 minutes answering your questions over the phone”.

Secondly, academic research and behavioural science show that even the respondents in a representative sample cannot give accurate answers about what they believe, what they do, or what they intend to do. In a 2005 study, academics identified 800 respondents who said they drank a particular drinks brand daily; 42 per cent of them then did not consume the brand once over the following week. In their 2012 paper 'Reality check in the digital age: The relationship between what we ask and what people actually do', Alice Louw and Jan Hofmeyr tracked actual purchases against answers to brand metric questions. They found little to no relationship between individuals' responses to prompted and unaided awareness questions and their likelihood of buying the product. Consistent with this, econometric sales models are rarely, if ever, able to include brand metrics as meaningful variables, suggesting the correlation is weak.

Behavioural science has shown that we change our purchase decisions and behaviour based on hundreds of factors we don't recognise. We value cookies more when the jar is nearly empty, we drink less when those around us do, and we pick the default option. All without understanding why we have done so, and we then poorly post-rationalise our behaviour, which makes us poor forecasters of our own actions. On top of this, neuroscience shows that the unconscious mind dominates our decision making, which is why the mere exposure effect demonstrates that ads we can't consciously remember can still affect our emotional brain and our purchase decisions. Yet traditional surveys ask only rational questions, aimed at the part of our brain that plays the smaller role in decision making.

Finally, IPA research, across hundreds of examples, has shown that focussing on brand metrics alone is far less effective than focussing on behaviour change. It finds that campaigns setting a behavioural metric are “28 per cent more likely to report a large business effect, and 41 per cent more likely to be accountable”, and that “in all cases where behaviour change [objectives] are set the effectiveness success rate is 50 per cent, compared to only 11 per cent when only soft objectives (eg attitude or awareness) are set.”

This means that if we want to ensure our campaigns are successful, and to measure accurately whether they are, we should not rely on brand tracking but put the effort and cost into tracking behaviour change. Wherever possible this should be sales-related (assuming sales are the overall aim of the campaign), with some budget set aside to run A/B tests, regional tests, or econometric modelling. We also have access to far more behavioural data with which to measure success than ever before: did users who saw our mobile ads go into our shops more often than comparable users who didn't; did our website traffic increase in line with our advertising; was there an increase in bookings through our own website rather than through aggregators? A sketch of the first of these comparisons follows.
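To make the exposed-versus-control idea concrete, here is a minimal sketch in Python of a two-proportion z-test on shop-visit rates. The function name and every figure in it are hypothetical illustrations, not data from any real campaign.

```python
# Hedged sketch: did users exposed to our mobile ads visit our shops
# more often than a comparable, unexposed control group?
# All numbers below are invented for illustration only.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_exposed, n_exposed, conv_control, n_control):
    """Two-sided z-test for a difference in two conversion rates."""
    p1 = conv_exposed / n_exposed
    p2 = conv_control / n_control
    # Pooled rate under the null hypothesis of no difference
    pooled = (conv_exposed + conv_control) / (n_exposed + n_control)
    se = sqrt(pooled * (1 - pooled) * (1 / n_exposed + 1 / n_control))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p2, z, p_value

# Hypothetical read: 10,000 exposed users with 420 shop visits versus
# 10,000 matched control users with 350 shop visits.
uplift, z, p = two_proportion_z_test(420, 10_000, 350, 10_000)
print(f"uplift: {uplift:.2%}, z = {z:.2f}, p = {p:.4f}")
```

On these made-up figures the uplift is 0.70 percentage points with p ≈ 0.01, the kind of behavioural evidence a brand-tracking question could never supply; a real study would also need careful matching of the control group.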

Some campaigns, however, are focussed on building long-term brand strength and don't lend themselves to immediate hard-metric measurement. Here we can at least use methodologies that incorporate behavioural science, so that we get beyond simply asking rational questions about emotional campaigns. There are many brilliant research companies incorporating neuroscience, facial recognition, avatar-based response, 'FaceTrace' methodology and time-pressure responses to capture our unconscious attitudes.

As an industry we spend billions on advertising, and the independent research suggests we need to redouble our efforts to find the most effective ways to track the success of that spend. Fortunately, there have never been more, or better, options for doing so.

Will Hanmer-Lloyd is a behavioural planner at Total Media
