
Collective brand safety progress of Facebook, YouTube and more laid bare

By Rebecca Stewart, Trends Editor

April 19, 2021 | 6 min read

The combined progress made by Facebook, Instagram, Twitter, YouTube, TikTok, Snapchat and Pinterest in tackling issues such as hate speech and explicit content has been laid bare for the first time.

A fresh report from the World Federation of Advertisers (WFA) has ranked brand safety across all seven of the major social media platforms. Why? To help bring transparency to the media supply chain and help marketers decide where to spend their precious ad dollars.

Published by the Global Alliance for Responsible Media (GARM) – the trade body’s cross-industry initiative which seeks to address how advertising props up harmful content online – the report looks to provide a common framework for the entire ad industry.

Set to be updated yearly, the document aims to serve as a benchmark, charting the strides social media companies are making in policing their platforms.

How is brand safety performance monitored?

  • The GARM Aggregated Measurement Report is based on existing first-party data and transparency reports provided by the likes of Facebook, Snap and Instagram. This is then aggregated by the WFA and constructed around four key questions marketers can use to assess progress over time.

  • These questions are: 1. How safe is the platform for consumers? 2. How safe is the platform for advertisers? 3. How effective is the platform in policy enforcement? 4. How does the platform perform in correcting mistakes?

  • Ultimately, the WFA hopes the report will provide a common and focused framework for ad industry stakeholders to make more informed decisions about their advertising investment.

Ok, so what does this aggregated data show?

  • The amount of harmful content removed from platforms remained consistent in the first and second halves of 2020, with 3.3bn pieces of content – including videos and images – purged for violating guidelines set out by the social media platforms.

  • More than four out of five pieces of removed content fell under the spam, sexual content, and hate speech and acts of aggression categories, as set out by GARM.

  • Removal was no longer primarily reserved for individual pieces of content either. The WFA’s data showed an uptick in the number of user accounts removed for violating content across the digital spectrum. All platforms that participated in the report, and which shared data for the first and second halves of 2020, noted an average growth in account removals of more than 30%, with removals of up to 14.9m reported by some. For YouTube, this increase was 40%.

  • The data also illuminated a growth in action taken by platforms on hate speech and acts of aggression, following on from a brand boycott last June which saw some of the world’s biggest advertisers pause social media spend in order to put pressure on big tech to tackle the problem.

  • The WFA saw intensified enforcement across the metrics being shared by platforms, including the advances made by YouTube specific to account removals. Facebook too reduced the amount of hate speech within its walls, noting a decrease of 20% from Q3 to Q4 of 2020.

  • These initial improvements have occurred amid an increased reliance on automated content moderation to help manage blocking and reinstatements due to Covid-19 disruptions that resulted in moderation teams working with limited capacity.

  • Pandemic-related workplace restrictions did have an impact on how platforms approached content moderation; automated and machine learning methods for content assessment and removal grew in importance consistently across platforms.

  • Facebook, Instagram, and Pinterest report on the role of machine blocking at a category level, showing the highest machine moderation in areas such as: terrorism; violent graphic content; crime and harmful acts; and arms and ammunition.

Why marketers should care

  • Since 2017, when the Times of London ran a splash about how big brands were ‘funding’ terror, pornography and illegal content, brand safety has been top of mind for chief marketing officers.

  • In recent years, these fears have been exacerbated by what has been perceived as a failure from social networks to effectively police the content within their own walls, coupled with concerns about online hate speech spilling over into real-life scenarios (such as the recent riots which saw Trump supporters storm the Capitol building in the US). All underscored by the question of how advertising dollars have propped up such content.

  • Recent data from marketing consultancy Ebiquity revealed that 65% of brands were exposed to non-safe environments in 2019, demonstrating brand safety to be a complex, highly relevant issue.

  • The WFA’s report follows nine months of collaborative workshops between major advertisers, agencies and key global platforms working together as one of GARM’s working groups. It marks the first time data has been brought together in a single, agreed location around four core questions and eight authorized metrics agreed as critical to tracking progress on brand safety.

  • By aggregating existing platform transparency reports and adding in policy-level granularity, the new document should create a common framework that will allow advertisers to assess progress against brand safety for each platform detailed. The new framework also highlights the use of best practice methodology.

  • Raja Rajamannar, chief marketing and communications officer at Mastercard and WFA president, said: “This report is great progress for our joint efforts, bringing together consistent and reliable data that marketers can depend on.

  • “It establishes common and collective benchmarks that reinforce our goals and help brand leaders, organizations and agencies make sure we keep media environments safe and secure.”
