At a time when brands and advertisers are drowning in data as they try to understand audience behaviours, are they running the risk of losing the human element by relying on algorithms to make sense of it all?
For years, market research was how advertisers learnt about consumers and how to target ads at them. Lately, however, spending on it has dipped. Previously, advertisers would look to bigger brands to see how market research was being handled, according to Carlos Serra, head of strategy and partnerships at Audiense.
Now more and more brands are focusing on matching data from a number of sources to get richer insight, according to reports from the likes of Nielsen and Ipsos.
Liam Corcoran, vice president of ad and audience measurement EMEA at Research Now, explained that this is because advertisers now use data to understand behaviours, and undertake market research purely to understand the 'why'.
He said: “Before there was a lot of money spent on trying to understand people's behaviour, but technology allows you to track behaviour across multiple devices. So really [market research] is now used to understand why they behaved in this way based on that behavioural data that brands already have access to.”
Finding a needle in a haystack
Sifting through the data is not as simple as running a panel or survey, as these yield a lot of information that is not necessarily relevant. This is where the likes of Ian Anderson, principal research scientist at InMobi, step in.
Explaining that the amount of excess data is unmanageable, he said: “You can’t actually look at all that data as a human – you have to use machines and algorithms. And we are seeing ever more sophisticated algorithms to interpret data.”
However, relying purely on algorithms to interpret data can skew campaign feedback, as Sarah Mole, digital director at Sister London, explained.
Following an influencer campaign the agency worked on in London, when Mole looked at the insights her data provided, the sentiment appeared predominantly negative.
Upon further inspection, the event itself had received positive feedback; it was in fact the travel to the event that drew the negativity, as people complained about having to get up early and noted they were tired.
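The trap Mole describes can be sketched in a few lines: an overall sentiment average lumps everything together and reads as negative, while grouping the same scores by topic shows the negativity sitting with the travel rather than the event. The figures and topic labels below are hypothetical, purely for illustration.

```python
# Hypothetical sentiment scores (-1 = very negative, +1 = very positive)
# tagged by topic, mimicking posts about an event and the travel to it.
posts = [
    {"topic": "event",  "sentiment":  0.8},
    {"topic": "event",  "sentiment":  0.6},
    {"topic": "travel", "sentiment": -0.9},
    {"topic": "travel", "sentiment": -0.7},
    {"topic": "travel", "sentiment": -0.8},
]

# A naive automated readout: one average over everything.
overall = sum(p["sentiment"] for p in posts) / len(posts)
print(f"overall: {overall:.2f}")  # → overall: -0.20 (looks like a bad campaign)

# The human-style second pass: break the same scores down by topic.
by_topic = {}
for p in posts:
    by_topic.setdefault(p["topic"], []).append(p["sentiment"])

for topic, scores in by_topic.items():
    print(f"{topic}: {sum(scores) / len(scores):.2f}")
# event scores average positive; only the travel scores drag the total down.
```

The aggregate score alone would have labelled the campaign a failure; the breakdown makes clear the event itself was well received.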
She said: “We are trying to get the balance right between automation and humanisation of the analytics we get. It’s about interpreting automation to a point because it is impossible to do so [entirely] humanly. You start off with the automation and then you use humans to manually go through those results to actually take the interpretation of the data.”
Agreeing in part, Serra noted that it would be a waste of resources to always have a human analysing automated data results for tasks such as product recommendations. At the end of the day, however, algorithms and machines cannot understand sentiment and emotion.
"Don’t expect data to give you the answer," he said. "Expect it to help you build the right hypothesis and to continue investigating."