YouTube’s UK boss: ‘Brand safety is no longer top of the list for advertisers’
In the wake of another ‘brand safety crisis’ – this week it emerged that advertisers were inadvertently funding climate denial videos on YouTube – the platform’s UK managing director has spoken out on why brand bosses have not resorted to pulling spend in the way they might have in years past, and on the changes it is now making.
Last week, a study from research firm Avaaz found that ads from major brands including Samsung, Warner Bros, L’Oréal and Danone had appeared next to a host of climate denial videos. The findings suggested that 16% of the top 100 related videos for the search term ‘global warming’ contained misinformation. The top ten videos had averaged one million views each.
It was a scathing report, and it was covered everywhere from the Guardian and Time Magazine to trade publications, including The Drum. The headlines would be familiar to many of those same advertisers stung in 2017 when The Times ran a front-page exposé on how advertisers were inadvertently funding terrorism, and then paedophilia, when their ads ran next to illegal content that had slipped through the YouTube vetting net.
But the response from budget-holders this week has been considerably different to the way they reacted two years ago.
L’Oréal, for example, was quick to issue a statement to say it was actively working with YouTube to remove the ads from the videos that promoted climate misinformation – but, crucially, it wasn’t pulling spend altogether.
Speaking to The Drum, YouTube’s UK managing director Ben McOwen Wilson suggests the feedback from brands to this issue has shown how far the platform has come in dealing with problematic content.
“We’re being clear on what our policies are, and then removing content that breaches those policies, and aiming to remove it before it gets any views – that is our goal. And we're definitely progressing there,” he says, referring to its latest Transparency Report which boasted that Google caught 84% of content that violated its policies, and of that number the vast majority of videos (85%) were taken down before anyone had seen them.
But some policy areas are easier to define, and therefore police, than others. Videos promoting terrorism or exploiting children are illegal and obviously fall into the former bracket. Content around the climate change debate, on the other hand, is trickier to manage.
“The areas where it's always been hard is where there is context or subtle nuance that is needed to determine if something's one side or the other of a policy line," McOwen Wilson continues.
"And that's where being very open and transparent with our advertising partners – whether they’re brands or agencies – has been really helpful in them understanding that even if YouTube finds itself in a position where somebody feels we're not doing something they would appreciate with their brand, that they understand the system that we've got in place and ideally they will understand ‘okay, why is it you're not catching it’,” he says, suggesting this is why he’s not seen the backlash and pulled spend from brands as he did in 2017.
Currently, Google does not have a stance on climate change denial videos, though it does have a policy around videos that contain obvious misinformation, such as ‘flat earth’ content.
This includes a ‘classifier’, launched in the second half of 2019, which actively seeks out videos containing misleading information and ensures they are not served to people as recommended viewing. This system effectively buries offending videos among the 500 hours of content uploaded to YouTube every minute. Videos targeted by this tool aren’t content Google would remove altogether, but if someone did want to find a film about flat earth, they’d need to search for it deliberately.
When it comes to climate change topics, advertisers have a choice about what their content appears adjacent to and can manually block their ads from running next to certain genres.
“So that's the area of failing,” says McOwen Wilson. “It’s an area where, in 2020, there will be some discussion with government. And I said this very publicly last year that I welcome that discussion as the YouTube managing director and, more than that, as a British citizen [...] because that's where the debate has reached, society and the open internet need to decide where we want to draw some lines.
“The debate that we find ourselves in is where should there be clear lines beyond laws that currently exist? Because the laws that currently exist will also apply to us online, but where are the other areas where you want clear lines drawn? And what are those around?”
Contrary to the belief that Google is shying away from making those decisions itself, McOwen Wilson stresses the company knows that, for business, any uncertainty around monetising climate change videos is “worse than knowing what the rules are” – but it doesn’t believe in setting those rules without wider consultation from other industries.
“And, certainly in the conversations that I have, not just with economic regulators and elected officials or brands or our content creators, people do recognise first and foremost, the value that openness brings as being far greater than the downsides,” he adds.
‘Brand safety is not the biggest thing marketers are interested in’
But McOwen Wilson says that beyond this latest challenge, YouTube has seen a shift in how advertisers approach the problem of brand safety.
Many brands have bolstered their internal knowledge with hires like 'brand safety officers' and 'chief media officers', while others have improved their own processes rather than relying on agencies and platforms to keep them safe. Diageo, for example, set up a Trusted Marketplace programme in the wake of the 2017 crisis to better control where its online ads go (the drinks maker is now tentatively testing a return to YouTube).
Google doesn't break out the performance of the video platform's advertising business, but the company stated in its third-quarter earnings that it saw a 17% increase overall in ad revenue to $33.9bn, largely driven by mobile search and YouTube. Meanwhile, a survey from GumGum and Digiday last year revealed that though brand safety remains a concern, only 60% of ad industry professionals would call it “serious”, down from 90% in 2017.
McOwen Wilson believes YouTube has won advertisers around over the past two years, and that the policies and protections it’s put in place – from AI moderation systems to increasing the number of humans on hand to intervene – have considerably improved the trust brands place in it.
“Certainly, with brands and with agencies, the topic of brand safety is nowhere near the top of the list for them at the moment with us,” he claims. “It is a hygiene factor that they demanded three years ago that we needed to raise our game to deal with, and they are reasonably happy that we have made progress.
"Critically, we continue to update them on what it is that we have done and not just, ‘oh, look, it's all gone away’ but that what’s going away is the result of us really, really thinking how we can bring all of our tech, all of our policy thinking, and then increase staffing to bear to make sure that – not just from a brand's point of view, but actually from a user's point of view – that the content that you see and are exposed to on the platform is appropriate.”
However, many brands are determined to keep the pressure on.
This week, at the Davos summit in Switzerland, global advertisers including Mars, P&G, Adidas, Lego and Unilever outlined a plan to suffocate harmful content online by ensuring those spreading it have “no access” to advertiser dollars.
A three-pronged strategy has been set out to prevent media investments from fuelling the spread of content that “inflict[s] damage on society” on YouTube and Facebook.
Facebook today (23 January) has been criticised for the spread of conspiracy and climate change denial content linked to the devastating fires in Australia. A BuzzFeed News investigation found that far-right, fringe and conspiratorial pages were seeing unusual success by spreading content that misdirected blame away from climate change. In response, Facebook said it was focused on "removing content and accounts that violate our policies, reducing the distribution of misleading content, and informing people when they do come across misleading content", without specific mention of where climate change denial fits in its policies.
Having convened at Davos, this group of advertisers must now reach a consensus on what harmful content is, and on whether climate change denial is a topic of big enough concern to push the platforms into demonetising it altogether.
This is the first article that will appear in The Drum following its conversation with YouTube's Ben McOwen Wilson. A second will run next week, outlining his plans for the platform in 2020 as his role shifts from managing Europe to the UK market.