Facebook codes vigilant AI software to red-flag offensive live streams

By John Glenday, Reporter

December 2, 2016 | 2 min read

Facebook is busying itself with the development of new artificial intelligence software that it claims will be capable of automatically flagging offensive live streams to editors as soon as they are broadcast.

The prototype Facebook Live tool is the latest in a series of software advances being pioneered by the social networking site as it seeks to harness technology to take on the gargantuan task of monitoring its 1.8bn users to ensure nobody steps out of line.

It follows a number of high-profile cases in which Facebook was accused of an inconsistent approach to moderation, having censored images of breastfeeding mums and a famous Vietnam War photo featuring nudity, whilst allowing fake news to proliferate.

Joaquin Candela, Facebook’s director of applied machine learning, said that the system would be capable of identifying ‘nudity, violence, or any of the things that are not according to our policies’, but that it still has two main hurdles to cross: “One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.”

At present, moderation efforts rely primarily on users themselves reporting content which they deem to have overstepped the mark, before a Facebook employee manually checks whether it falls foul of the company’s ‘community standards’.

Particularly sensitive decisions are escalated up the management tree to top executives.
