P&G, Google, Lego & more unveil plan to suffocate ‘harmful’ content

By Rebecca Stewart, Trends Editor

January 22, 2020 | 7 min read

Some of the world’s biggest brands – including Mars, P&G, Adidas, Lego and Unilever – have outlined a plan they hope will ultimately suffocate harmful content online by ensuring those spreading it have “no access” to advertiser dollars.

The announcement marks the first big initiative from the Global Alliance for Responsible Media / Lego

Along with Google, Facebook, several ad agency networks and trade bodies, around 40 household names have been involved in designing the blueprint.

Launched at Davos on Wednesday (22 January), the announcement marks the first big initiative from the Global Alliance for Responsible Media (GARM): the cross-industry working group founded by the World Federation of Advertisers (WFA) in 2019.

The ultimate aim of the three-pronged strategy is to prevent advertisers' media investments from fuelling the spread of content that promotes terrorism, violence or other behaviours that inflict damage on society.

To suffocate this type of content, the group’s first mission is to “raise the bar” in terms of identifying and eliminating videos and rhetoric uploaded by bad actors on the likes of YouTube and Facebook.

The three key tenets of the plan include:

  • reaching a consensus on what harmful content is;
  • developing tools that let brands and media agencies take better control of where their media spend is going to avoid putting it in the wrong place;
  • and establishing a set of shared measurement standards so marketers can fairly assess their ability to block, demonetise and take down harmful posts and videos.

“The consistency created by aligning the industry definitions, tools, and measurement is another step in our journey to create a better, brighter, safer and more trustful digital ecosystem for brands and society,” said Luis Di Como, executive vice-president of global media at Unilever.

Marc Pritchard, chief brand officer at P&G, agreed.

“It’s time to create a responsible media supply chain that is built for the year 2030—one that operates in a way that is safe, efficient, transparent, accountable, and properly moderated for everyone involved, especially for the consumers we serve,” he said.

“With all the great minds in our industry coming together in partnership with GARM, we can and should avoid the pitfalls of the past and chart a course for a responsible future.”

Ending a ‘reactive game of whack-a-mole’?

This multi-faceted bid by advertisers and platforms to tackle the ills of the internet follows a year in which the former came under immense pressure to think carefully about their ad placements and the dollars they hand over to big tech.

Though platforms like Facebook and YouTube have in the past 12 months alone introduced their own measures to combat the spread of illegal or harmful content within their walls, the issue is a nuanced and emotive one. Not least because of the sheer amount of content that gets uploaded online each day.

Between July and September 2019, an estimated 620 million pieces of harmful content were removed by YouTube, Facebook and Instagram.

Because of the platforms’ investments in teams and tools, the majority of this content was removed before consumers actually saw it. However, approximately 9.2 million pieces of harmful content still reached consumers during that three-month period, equating to roughly one piece of harmful content viewed per second.
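For readers who want to sanity-check that per-second figure, a rough back-of-the-envelope calculation bears it out (the 92-day quarter length is our assumption, not a GARM figure; the 9.2 million is the number cited above):

    # Rough check of the "one piece of harmful content per second" claim.
    # Assumes a 92-day July-September quarter (an assumption, not a GARM figure).
    pieces_reaching_consumers = 9_200_000        # Jul-Sep 2019 figure cited above
    seconds_in_quarter = 92 * 24 * 60 * 60       # 7,948,800 seconds
    rate = pieces_reaching_consumers / seconds_in_quarter
    print(f"{rate:.2f} pieces per second")       # ~1.16, i.e. roughly one per second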

It’s about much more than number crunching, though – there are moral issues at play too, and even when content isn’t directly monetised via ads, questions linger about whether brands should be putting their money on platforms that give extremists breathing space.

Though it appeared the brand safety crisis had reached its peak in the aftermath of YouTube’s ‘brands funding terror’ moment in 2017, a fresh set of implications underlining this ethical quandary presented itself in 2019 after a New Zealand terror attack – in which 51 people were murdered – was broadcast on Facebook Live.

Though the original clip wasn’t monetised, the 17-minute first-person footage was cut up and aired by local and global press. Excerpts from the video also found their way onto YouTube and Twitter. In the UK, tabloids were criticised for publishing the clip and, in some instances, running ads adjacent to it on their web pages.

Though Facebook updated its policy around Live video after the event, Rob Rakowitz, initiative lead for GARM, believes this new collaborative approach will pave the way for a less reactive way of working.

“Previous approaches to harmful content have been in part a reactive game of whack-a-mole,” he explained.

“We are convinced this uncommon collaboration is what is needed to change the game.

“Since our launch in June, we’ve made significant progress to raise the bar in terms of identifying and eliminating content uploaded by bad actors, for the benefit of brands, people and society at large,” he said.

What's actually going to change?

GARM argues that a collaborative approach to protecting the four billion consumers who use the web is much needed.

With the initiative bringing together 39 advertisers, six agency holding companies, seven media platforms and seven industry associations, collectively representing about $97bn in global ad spending, it certainly has some clout.

The coalition’s main aims are to accelerate progress by means of a three-point strategy.

The first focus will be on developing and adopting common definitions to ensure that the industry is categorising harmful content in the same way.

The 11 key definitions, covering areas such as explicit content, drugs, spam and terrorism, will give platforms, agencies and advertisers a shared understanding of what harmful content is and how to protect vulnerable audiences, including children.

Establishing these standards is the “first step” needed to stop harmful content from being monetised through advertising, said the WFA.

The next stage is more technical and will involve the creation of tools that better link advertiser controls and media agency tools with the platforms’ own processes for categorising content.

This is all to improve transparency and accuracy in media buying and gear it towards building “safer consumer experiences”.

Step three will be about independent oversight and establishing shared measurement standards so the industry, and the duopoly, can assess their own ability to block, demonetise and remove harmful content. These capabilities will be independently verified to drive improvement for all parties.

“Digital media has fundamentally reshaped the way we connect with the world, and yet harmful and hateful online content has the power to tear us apart,” said Jane Wakely, lead chief marketing officer at Mars and GARM member.

“By driving collective action across the industry, and developing safeguards to ensure advertising budgets aren’t fuelling harmful content, we’re striving to create safer online communities to protect consumers.”
