
Facebook to suspend 'bad actors' regularly violating community standards from Live


By Shawn Lim, Reporter, Asia Pacific

May 15, 2019 | 4 min read

Facebook has announced that users who violate its community standards will be suspended from using Facebook Live, blocked for a set period of time after what the network deems a single serious offence.


This comes after the Christchurch terror attacks, where the terrorist live-streamed his massacre of 50 people on Facebook. The video then went viral, forcing Facebook to remove more than 1.5m shared copies of the footage.

Presently, Facebook removes content that violates its community standards, and users who breach these standards are barred from using Facebook and Live for a period of time. In certain cases, where a user repeats low-level violations or commits a single egregious violation, such as using terror propaganda in a profile picture or sharing images of child exploitation, Facebook bans them altogether.

The tech giant said it will now apply a ‘one strike’ policy to Live, meaning anyone who violates its most serious policies will be restricted from using Live for a set period of time – for example, 30 days – starting from their first offence.

For example, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period.

Guy Rosen, Facebook vice president of integrity, said: “Our goal is to minimise the risk of abuse on Live while enabling people to use Live in a positive way every day.

“We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook.

“We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook.”

Facebook also announced it will be investing $7.5m in new research partnerships with academics from The University of Maryland, Cornell University and The University of California, Berkeley to detect manipulated media across images, video and audio, and distinguish between unwitting posters and those who intentionally manipulate videos and photographs.

The social network reportedly realized days after the Christchurch attack that the video had been uploaded and shared in many distinct versions, rather than spreading from a handful of viral sources, which explains why its removals were slow.

“Dealing with the rise of manipulated media will require deep research and collaboration between industry and academia — we need everyone working together to tackle this challenge. These partnerships are only one piece of our efforts to partner with academics and our colleagues across industry — in the months to come, we will partner more so we can all move as quickly as possible to innovate in the face of this threat,” it said.

“This work will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred). We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack.”

The platform previously removed a batch of ‘dangerous’ users, deemed to have stepped beyond the pale by maliciously spreading fake news and right-wing extremism, as it steps up efforts to get its house in order.

Falling foul of the clampdown were the likes of former Breitbart News editor Milo Yiannopoulos, InfoWars founder Alex Jones and Nation of Islam leader Louis Farrakhan, who are among a range of prominent far-right figures, conspiracy theorists and anti-Semitic individuals to be silenced.
