Facebook has been under intense scrutiny lately for its policing of hate speech, misinformation and more – particularly with the US election on the horizon. Facing a boycott from top brands as a result, it has published a new report showing just how the lockdown is affecting its moderation capabilities.
The Drum takes a look at some of the highlights from the report.
What does the Community Standards Enforcement Report, August 2020 Edition say?
This is the sixth edition of the Community Standards Enforcement Report. Now quarterly, it includes metrics across 12 policies on Facebook and 10 policies on Instagram.
Its reviewers have been working from home as they identify content that breaches Facebook’s community policies. While they review content manually, its technology can also identify and remove different versions of the same violating material.
Home-working has meant ”fewer content reviewers”, which in turn has meant it has taken action on ”fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram”. It says it ”prioritized and took action on the most harmful content within these categories”.
Appeals are lower too, and it says it “couldn’t always offer them“.
Facebook claims that “prioritizing removing harmful content over measuring certain efforts“ means it couldn’t calculate the prevalence of violent and graphic content, adult nudity and sexual activity. It expects to be able to share those in the Q3 report.
What do the figures look like?
Facebook says improvements to technology have enabled it to ”take action on more content in some areas” and to increase its ”proactive detection rate in others”.
The proactive detection rate for hate speech on Facebook has increased from 89% to 95%, and it went from taking action on 9.6m pieces of content in Q1 to 22.5m in Q2. Its detection tech has also been rolled out in Spanish, Arabic and Indonesian, and improvements have been made in existing languages.
On Instagram, the proactive detection rate for hate speech increased from 45% to 84%. It took action on 808,900 pieces of content in Q1 and 3.3m pieces in Q2. The language expansion ran on Instagram too.
It removed more terrorism content than ever before. On Facebook, it went from taking action on 6.3m pieces of content in Q1 to 8.7m in Q2. ”We saw increases in the amount of content we took action on connected to organized hate on Instagram and bullying and harassment on both Facebook and Instagram.”
Has it made progress?
On its last earnings update, where it posted a surprising 11% revenue growth and showed just why it is the engine of SME advertising, chief operating officer Sheryl Sandberg made the claim that it does not profit from hate. ”We don’t benefit from hate speech. We never have. Users don’t want to see it. Advertisers don’t want to be associated with it.”
This latest edition of its report says it has ”made progress” in combating hate on its apps. ”We know we have more to do to ensure everyone feels comfortable.”
It has established ”new inclusive teams and task forces”, including the Instagram Equity Team and the Facebook Inclusive Product Council. This will help it build products that are deliberately fair and inclusive (it has faced accusations of bias from both liberals and conservatives over how it polices content).
It is updating its policies to more specifically account for certain kinds of implicit hate speech, ”such as content depicting blackface, or stereotypes about Jewish people controlling the world”.
Criticism over the growth of QAnon and white supremacy groups has been particularly levied at Facebook. It says that since October 2019, it has ”conducted 14 strategic network disruptions to remove 23 different banned organizations, over half of which supported white supremacy”.
Finally, it said it will undergo an independent, third-party audit, starting in 2021, to validate its numbers.
Across its apps, Facebook boasts 3.14 billion monthly users and there’s never before been a company with such an immediate influence on the world.
The Facebook suite of apps has proven effective at fulfilling its mission statement of connecting people, but this rapid growth has come at the expense of content moderation. It has, however, long pushed back against the idea that it is responsible for the content it effectively publishes.
More than 1,100 advertisers boycotted Facebook in July, but the top 100 advertisers make up less than 20% of Facebook’s ad revenue. Furthermore, many used the boycott as an opportunity to cut marketing spend anyway.
Jacob Dubbins of the Conscious Advertising Network has talked The Drum through the biggest ethical issues facing marketers. He says his group does not encourage a boycott of Facebook, adding that ”the problem is much bigger than Facebook and one platform”.
Nonetheless, if the boycotters did get their way and the Facebook empire were to collapse, where would media buyers put their spend? The Drum recently asked that question and whether it would really improve the world.