Facebook system designed to smother harmful misinformation actually spread it

By John Glenday | Reporter

April 1, 2022 | 3 min read


Facebook engineers have belatedly uncovered a significant flaw in the downranking system used to filter out harmful content, which exposed up to half of all News Feed views to potential 'integrity risks' over a six-month period.

Facebook flaw increased harmful News Feed content for six months / Adobe Stock

Reports in The Verge suggest the ‘massive ranking failure’ was first identified last October when engineers battled against a wave of misinformation that threatened to inundate the News Feed. Closer investigations revealed that a ranking system designed to suppress misinformation from flagged accounts, as identified by a team of external fact-checkers, was instead surfacing these posts to audiences.

Leaked correspondence suggests the bug intermittently boosted views of malign posts by as much as 30% until the issue was finally resolved on March 11.

Throughout this six-month period, Facebook's much-vaunted policing algorithms failed to properly downrank nudity, violence and Russian state propaganda – a period that overlapped with Russia's invasion of Ukraine.

Fielding inquiries from The Verge, Meta spokesperson Joe Osborne described five separate instances of "inconsistencies in downranking," attributed to a "software bug," during which inappropriate material was given increased visibility.

Osborne insisted, however, that the episode "has not had any meaningful, long-term impact on our metrics," stressing that content that passed the threshold for deletion was not affected.

The system of downranking has been touted by Facebook as evidence that self-regulation is effective, heading off calls for new legislation to curb the spread of ‘sensationalist and provocative’ content that typically attracts the most attention.

Until now, Facebook has boasted of the success its algorithms have had in identifying 'borderline' content that skirts the boundaries of acceptability in areas such as hate speech, flagging suspected infractions for manual review.

A recent report found that hate speech was present in six out of every 10,000 Facebook views.
