
Big brands pull ads as YouTube battles conspiracy and misinformation videos


By Chris Sutcliffe, Senior reporter

February 14, 2022 | 7 min read

Update: YouTube has reiterated to The Drum the tools it has in place to catch misinformation, stating: "We also have policies against certain kinds of Covid-19 misinformation and will remove videos when flagged to us. To date, we have removed over one million videos related to dangerous or misleading Covid-19 information since early February 2020."


YouTube has run afoul of research that shows it still serves ads against misinformation

YouTube is on the receiving end of a fresh round of criticism about brand safety lapses on its platform after reports emerged that it had served ads for major brands alongside misinformation and conspiratorial content.

According to The Times, brands including Disney, Amnesty International and Vodafone have inadvertently advertised against some of the misinformation and conspiracy videos present on the platform.

Amnesty International was found to be advertising against one video titled ‘Boris blames Keir for letting off evil BBC Saville (sic) (clapping hands emoji) Huge Starmer Fail,’ which was posted shortly before the attacks on the UK Labour leader. The Times reports that the charity has since pulled its ad spend from the platform.

Vodafone was found to be advertising on a channel that included videos featuring 5G conspiracy theorists.

Smile Direct Club, HelloFresh and Quooker have also unknowingly appeared against anti-vaccine videos.

A YouTube spokesperson told The Times it had taken down videos highlighted as violating its policies and removed adverts from other videos.

“Our teams are working around the clock to quickly remove violative content, including Covid-19 misinformation and content that harasses or threatens individuals,” they said.

“Additionally, we ensure that we are connecting people with authoritative information about the virus and vaccines. We do not allow ads to run alongside unreliable and harmful claims and take action when our policies are breached.”

YouTube has made efforts to address major advertisers’ brand safety concerns following a series of issues involving terrorist content, paedophilia and climate denial misinformation. It has largely relied on artificial intelligence (AI) and changes to its algorithm to help weed out problematic content.

But last year it announced it was reprioritizing human moderation on its platform, not to flag disinformation but to ensure that its automated tools did not unfairly penalize fringe cases. The effort went hand-in-hand with industry-wide calls to avoid blunt tools such as blacklisting when it comes to ensuring brand safety.

Now, however, the pendulum seems to be swinging back the other way, in favor of giving advertisers greater control over which content they appear against. A report from IPG and Magna in September 2021 found that this effort is hindered by vague and inconsistent misinformation policies across platforms including YouTube.

At the time, Joshua Lowcock, global chief brand safety officer at Mediabrands network agency UM Worldwide, said: “While some platforms have policies on disinformation and misinformation, they are often vague or inconsistent, opening the door to bad actors exploiting platforms in a way that causes real-world harm to society and brands.”

That is further exacerbated by the lack of agreement on what counts as legitimate or illegitimate content across the various social platforms. In the run-up to the 2020 US election, YouTube chose to deprioritize conspiracy content in search results, while Twitter and Facebook (belatedly) began labeling misinformation.

Never wholly safe

UK marketing director for Google Ads Nishma Robb has previously stated that the tech giant might never be able to guarantee “100% safety” for brands on YouTube, following questions about the extent to which inappropriate comments could be left on videos featuring children.

“I don’t think that’s the reality of the platform. The reality is the internet has dark pockets and our job is to continue to work hard to ensure we have the machines and processes to remove harmful content.”

A Mozilla study published in July 2021 found that YouTube’s algorithm could recommend videos that went against its own policies, with videos promoting misinformation being the most frequent offender. The report states: “YouTube today provides no transparency into how it defines and treats borderline content. YouTube needs to step up and address this transparency gap.

“YouTube should expand its transparency reporting to include information about how the platform specifically defines ‘borderline content,’ the content moderation methodologies it applies to such content (e.g. downranking; deprioritizing), and aggregate data that can help assess the issues associated with this category of content on the services (e.g. how many times YouTube recommends borderline content and the overall amount of such content on the platform).”

Once again, the issue brands have with YouTube is around its role and responsibility in protecting them from appearing alongside harmful content. Advocates including Imran Ahmed, chief executive of the Center for Countering Digital Hate, have alleged that allowing lax standards is a financial decision on YouTube’s part.

YouTube, which is still the world’s biggest video platform, generated $7.2bn in advertising revenue during the third quarter of 2021.

Previous external efforts to force change at YouTube include a high-profile advertiser boycott in 2017, when major brands pulled spend from the platform. Since then, the vast majority of those advertisers have quietly resumed spending, and even at the time a panel of experts agreed the boycott was more about optics than a substantive decision by the brands.
