Hate speech comprises up to 6 of every 10,000 Facebook views despite improvements

By John Glenday, Reporter

May 20, 2021 | 4 min read

Facebook has laid bare the extent and severity of hate speech incidents across the social network in its latest quarterly Community Standards Enforcement Report.

Facebook acknowledges the shocking scale of hate speech in its latest enforcement update

The document pulls no punches, tabulating hard-hitting figures across a dozen policy areas including Covid-19 vaccine misinformation, violent content and sexual activity. It lands just after Starbucks threatened to leave the platform over the hate directed at its social purpose posts.

The numbers never lie

  • The headline hate speech prevalence rate on Facebook came in at 0.05% during the first quarter, equating to 5 to 6 views of violating hate speech content out of every 10,000 content views (a quick sketch of this conversion follows this list).

  • When looking at the broader trend, however, Facebook claims success, noting that the prevalence of hate speech has now decreased for three quarters in succession, having stood at 0.07% in the fourth quarter of 2020 and as high as 0.1% in the third quarter.
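
Prevalence measures how often viewers actually encounter violating content, rather than how much of it exists. As a minimal illustration of the arithmetic behind the figures above (the function name and framing are ours, not Facebook's):

    # Illustrative sketch: convert a prevalence percentage into the
    # "views per 10,000" framing used in Facebook's report.
    def views_per_10k(prevalence_pct: float) -> float:
        """prevalence_pct: share of sampled content views that violate policy."""
        return prevalence_pct / 100 * 10_000

    print(views_per_10k(0.05))  # 5.0 violating views per 10,000 (Q1 2021)
    print(views_per_10k(0.07))  # 7.0 (Q4 2020)
    print(views_per_10k(0.10))  # 10.0 (Q3 2020)

The view-based framing matters: a viral violating post counts far more heavily than one nobody saw, which is why Facebook reports prevalence as a share of views rather than a raw count of posts.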

Facebook fights back

  • Since publishing its first hate speech enforcement figures in 2017, Facebook has developed ever more sophisticated AI systems, to the point that 97% of the hate speech it removes is now found before any user reports it, up from 24% (this ‘proactive rate’ is sketched after the list below).

  • A similar positive trend can be seen on Instagram, where the share of removed content found before being reported has jumped from 45% to 93.4% since 2019.

  • The company has also introduced tougher penalties for people found to have abused Instagram’s direct message system, including account deactivation, alongside new privacy controls that let users filter out abusive messages.

  • In a statement, Facebook wrote: “Our goal is to get better and more efficient at enforcing our Community Standards. We do this by increasing our use of Artificial Intelligence (AI), by prioritizing the content that could cause the most immediate, widespread and real-world harm, and by coordinating and collaborating with outside experts.”
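
The 97% (Facebook) and 93.4% (Instagram) figures above reflect what the report calls the proactive rate: the share of actioned content that automated systems found before any user reported it. A minimal sketch of that ratio, with the function name and item counts hypothetical illustrations rather than report data:

    # Illustrative sketch: the "proactive rate" is AI-flagged removals as a
    # share of all removals (AI-flagged plus user-reported).
    def proactive_rate(found_by_ai: int, reported_by_users: int) -> float:
        total_actioned = found_by_ai + reported_by_users
        return 100 * found_by_ai / total_actioned

    # Hypothetical volumes: 9,700 AI-flagged vs 300 user-reported items.
    print(f"{proactive_rate(9_700, 300):.1f}%")  # 97.0%

Note that a high proactive rate says nothing about how much violating content is missed entirely; that is what the prevalence metric above is for.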

Anti-vaxxers in Facebook’s crosshairs

  • A key priority for Facebook at present is to contain the spread of vaccine misinformation, with the company having removed 18m examples since the pandemic began.

  • The social network has displayed a further 167m warnings on more borderline examples; 95% of the time, people who see these warnings do not click through and are instead directed to debunking articles written by fact-checking partners.

  • In all, Facebook has directed over 13.5m people toward official NHS and government websites on the issue, with a corresponding increase in vaccine acceptance of 4%.

Transparency push

  • For the first time, the figures have been validated by accountancy firm EY, providing third-party verification of the content moderation statistics, with Facebook stating: “We’re not grading our own homework.”

  • The social titan has also launched a dedicated Transparency Center to act as a one-stop shop for communicating its efforts to develop policy, enforce content standards and report on its progress.

Looking behind the numbers

  • Renewed enthusiasm for moderation follows a 2020 boycott threat from top brands concerned about the potential to have their names besmirched through association with unpalatable content.

  • While Facebook appears to have successfully assuaged those fears, the issue rumbles on to this day in the form of a coordinated social media boycott by football clubs protesting the platforms’ silence on the racist abuse of players.
