
Facebook on its moderation of child porn and extremist videos: ‘It is clear that we can do better’

Facebook has come under scrutiny

Facebook’s reluctance to take responsibility for the content circulated on its network could see it face criminal prosecution, according to a prominent UK QC. It has also drawn criticism from the National Society for the Prevention of Cruelty to Children (NSPCC) for an apparent inability to remove certain child pornography images, a shortcoming the charity says "poses serious questions".

Several examples of child pornography and terror content flagged on the social network were not removed by Facebook, reports The Times, underlining an issue with moderation on the site. The issue of child pornography groups on Facebook was also previously highlighted by the BBC.

The social network diminishes its responsibility for such content by positioning itself not as a media owner but as a distributor of content, and one that ardently stands behind its users' right to free speech.

Julian Knowles QC warned of the potential criminal charges that could be levelled against the company. He told The Times that illicit content involving children flagged up to him would “undoubtedly breach UK indecency laws”. In addition, an Isis execution video hosted on the platform could reportedly constitute encouragement of terrorism, a breach of the Terrorism Act 2006.

He said: “I would argue that the actions of people employed by Facebook to keep up or remove reported posts should be regarded as the actions of Facebook as a corporate entity. If someone reports an illegal image to Facebook and a senior moderator signs off on keeping it up, Facebook is at risk of committing a criminal offence because the company might be regarded as assisting or encouraging its publication and distribution.”

Facebook said it was “grateful to The Times for bringing this content to our attention,” adding: “We have removed all of these images, which violate our policies and have no place on Facebook. We are sorry that this occurred. It is clear that we can do better, and we'll continue to work hard to live up to the high standards people rightly expect of Facebook.”

The social network claims it is improving its moderation of such content. It boasts image recognition technology capable of identifying known exploitative images, and it works with investigators to build cases against those sharing them. Users can also flag up illicit or offensive content, but Facebook is looking to automate many of these processes with AI. It is currently developing a system capable of identifying such content and flagging it to a human moderator for review.

These strains of undesirable content also feed into the theme of brand safety. While agencies and advertisers distance themselves from Google and YouTube until it cleans up its act, Facebook, which now boasts five million advertisers, could face the same scrutiny, especially after well-publicised over-estimations of its viewer metrics rocked the ad industry.

On the issue, a spokesperson for the National Society for the Prevention of Cruelty to Children (NSPCC) told The Drum: “This is yet another appalling example of Facebook's failure to remove inappropriate and disturbing content from its own website despite users flagging it to them.

“It poses serious questions about what content they consider appropriate and what their community standards actually consist of if they are disregarding users’ legitimate concerns. More and more young people are telling Childline about upsetting content they are seeing online so it’s crucial that social media platforms stop making up their own rules when it comes to child safety.”

Anyone who sees child sexual abuse images can report the content to the Child Exploitation and Online Protection Centre (CEOP) or the Internet Watch Foundation (IWF); they can also contact the NSPCC helpline on 0808 800 5000 for further information.