

Facebook to root out terrorists with behavior prediction AI


By John Glenday | Reporter

February 17, 2017 | 3 min read

Facebook is set to channel its energies into thwarting terrorism by tasking its custom artificial intelligence (AI) software with scanning content posted on the social network to identify signs, behaviours and beliefs that could lead to terrorism.

The all-knowing tool will eventually be able to read between the lines of status updates to spot signs of terrorism, violence, bullying and suicide, but founder Mark Zuckerberg, an avowed disciple of AI, warned that such goals are still years away from being attained.

Facebook has come under fierce criticism for its seeming inability to tackle violent posts, trolling and so-called fake news, prompting it to redouble its efforts at automated censorship.

Highlighting the difficulty of the task at hand, Zuckerberg wrote in an open letter explaining his vision of globalisation and the global community that Facebook had become: “The complexity of the issues we've seen has outstripped our existing processes for governing the community.

“We are researching systems that can read text and look at photos and videos to understand if anything dangerous may be happening.

"This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content.”

As part of this learning curve Facebook is currently training its AI systems to differentiate between news reports concerning terrorism and actual terrorist propaganda.
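That distinction is, at heart, a two-class text classification problem. As a rough illustration only, a toy naive Bayes classifier over word counts shows the shape of the task; the training snippets, labels and helper names below are invented for this sketch and have nothing to do with Facebook's actual models or data.

```python
# Hypothetical sketch of a two-class text classifier of the kind described:
# separating news *about* terrorism from terrorist propaganda.
# All examples and labels are invented for illustration.
from collections import Counter
import math

TRAIN = [
    ("police report attack investigation officials said", "news"),
    ("journalists covered the incident and quoted witnesses", "news"),
    ("join our cause and take up arms against the enemy", "propaganda"),
    ("glorious fighters must spread fear and recruit brothers", "propaganda"),
]

def train(examples):
    """Count word frequencies per class for a naive Bayes model."""
    counts = {"news": Counter(), "propaganda": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the class with the highest summed log-probability (add-one smoothing)."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[word] + 1) / (total + len(vocab)))
            for word in text.split()
        )
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "officials said police investigate attack"))  # "news"
print(classify(model, "take up arms and recruit fighters"))         # "propaganda"
```

A production system would of course use far richer models and vastly more data; the point of the sketch is only that the same words ("attack", "terrorism") appear in both classes, so context, not keywords, has to carry the decision.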

Zuckerberg’s stated goal is to allow people to post any content they like, within the bounds of the law, with ever-vigilant algorithms patrolling the site to filter specific material from users who do not wish to see it.

On the issue of fake news and the sharing of sensationalist headlines on the platform, he said he hoped to introduce a measure within users' personal settings, offering the community options to control what they did and did not want to see.

"We will periodically ask you these questions to increase participation and so you don't need to dig around to find them. For those who don't make a decision, the default will be whatever the majority of people in your region selected, like a referendum. Of course you will always be free to update your personal settings anytime.

"With a broader range of controls, content will only be taken down if it is more objectionable than the most permissive options allow. Within that range, content should simply not be shown to anyone whose personal controls suggest they would not want to see it, or at least they should see a warning first. Although we will still block content based on standards and local laws, our hope is that this system of personal controls and democratic referenda should minimize restrictions on what we can share."
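The controls Zuckerberg describes amount to a simple decision rule: remove content only when it exceeds what the most permissive setting allows, and otherwise just hide it from users whose own threshold (or, absent a choice, their region's "referendum" default) is stricter. A minimal, entirely hypothetical sketch, with invented scores and thresholds:

```python
# Hypothetical sketch of the "personal controls" rule described in the letter.
# Scores, thresholds and the regional default are invented for illustration.

REGIONAL_DEFAULT = 0.6  # "referendum" default for users who never choose

def visible_to(user_threshold, content_score, most_permissive=0.9):
    """Return 'removed', 'hidden' or 'shown' for one user and one post."""
    if content_score > most_permissive:
        return "removed"  # more objectionable than any setting allows
    threshold = user_threshold if user_threshold is not None else REGIONAL_DEFAULT
    return "hidden" if content_score > threshold else "shown"

print(visible_to(0.3, 0.5))   # stricter user: hidden
print(visible_to(None, 0.5))  # regional default applies: shown
print(visible_to(0.8, 0.95))  # beyond the most permissive option: removed
```

The design point is that removal is a global, last-resort outcome, while most filtering is per-user and reversible, which matches the letter's aim of minimising outright restrictions.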

The letter spelled out his vision of continuing to build a connected global platform for the world as he hoped people would "act anew."

