Despite efforts to quash advertisers' concerns around brand safety and restore trust, the UK marketing director for Google Ads has admitted the tech giant may never be able to guarantee “100% safety” for brands on YouTube.
“I don’t think that’s the reality of the platform,” Nishma Robb explained, her comments coming just weeks after advertisers like AT&T once again froze spend over brand safety concerns.
This time, worries were heightened after it was found that paedophiles were leaving “predatory” comments on videos featuring children.
“The reality is the internet has dark pockets and our job is to continue to work hard to ensure we have the machines and processes to remove harmful content.”
YouTube has accelerated the launch of a comment classifier that identifies and removes unsuitable posts. Robb claimed its systems now catch 98% of extremist or violent videos before they are seen by anyone, though she admitted the AI safeguards it has put in place won't remove all potentially harmful content.
It's a figure that marks an improvement on previous AI-powered detection and removal tools, which at the last count purged 75% of extremist YouTube videos before a human flagged them. Some 400 hours of video are uploaded to YouTube every minute.
However, despite that progress, the 2% that slip through mean advertisers are still approaching the platform with caution.
Diageo is among those still tentatively "testing" its way back to the platform on a global scale after a series of brand safety scandals. Meanwhile Sky, the UK's largest advertiser, also revealed last summer that it still hadn't resumed spend. "We only put money where we understand how it’s being spent," said the broadcaster's chief exec Stephen van Rooyen.
AT&T had only just resumed spending after the last brand safety scandal - brought on by The Times exposé into how brands were inadvertently funding terrorism - before it pulled budgets once again in light of the dangers posed by paedophiles in the comment sections.
Robb stressed that YouTube is doing more than simply investing in machine learning to tackle the issue. It has also been working with authoritative bodies like “the police, the charities and NGOs” to collectively “understand the trends of bad actors and the things they do” within its walls, she said.
Robb's comments came during a panel at the Incorporated Society of British Advertisers' (Isba) annual conference on Tuesday (5 March).
A brief history of YouTube's brand safety crises
Over the past two years, YouTube has faced unrelenting questions from advertisers on the issue of ad misplacement.
More recently, the Google-owned platform was forced to disable comments on content featuring minors amid fears the comments section was being abused. AT&T, Hasbro, Nestle, Disney and Epic Games were among those to temporarily freeze spend, but the roots of the problem run a lot deeper.
The issue first came to mainstream attention back in 2017, when The Times published an investigation detailing how household brands were inadvertently “funding terror” by having their ads run adjacent to extremist and violent content. It was a front-page story that caused the likes of M&S and the UK government to pull investment in the short term.
In December the same year, the crisis rumbled on as YouTube was revealed to be serving ads against videos featuring child abuse and disturbing scenarios. The headlines prompted Mars, Diageo and Adidas to vote with their wallets. Mars returned, but temporarily pulled spend again the following August after a pre-roll ad for Starburst was shown before a video fuelling London gang violence.
In every instance, YouTube has pledged action against what its UK and Ireland boss Ronan Harris has described as “wholly unacceptable” and “undesirable”.
In response to recent child safety concerns, YouTube upped its use of machine learning, which helped it double the speed at which it takes down content that violates its rules.
Despite this, Robb said YouTube will “continue to use humans and human verification” to make sure “the platform is safe for users, particularly, and advertisers.”
As part of its charm offensive, the platform has also developed a suite of internal tools to combat the issue. These include strict new criteria for content monetisation and greater controls for advertisers on what they perceive as ‘appropriate content.’
Additional reporting by Rebecca Stewart