Jigsaw, a Google subsidiary, has revealed it is working to eliminate online abuse using artificial intelligence.
The company is using machine-learning algorithms to identify abusive comments at scale, finding and moderating them as part of its conversational AI project, ‘Perspective’.
As the company puts it: “Few things poison conversations online more than abusive language, threats, and harassment. We’re studying how computers can learn to understand the nuances and context of abusive language at scale.”
Reuters reports that the technology has been tested at the New York Times, a publication that receives a high volume of comments, some no doubt linked to President Donald Trump’s repeated accusations that the paper is failing.
The company’s president, Jared Cohen, took to Medium to explain the scheme. He said that moderators at the New York Times go through up to 11,000 comments per day, which is why the paper has restricted comments to only 10% of its newly published content.
“We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day," said Cohen.
"Perspective reviews comments and scores them based on how similar they are to comments people said were 'toxic' or likely to make someone leave a conversation.
"To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments."
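The scoring idea Cohen describes — rating a new comment by how similar it is to comments humans have already labelled toxic — can be sketched with a toy nearest-example classifier. This is an illustrative simplification, not Perspective's actual model; the example phrases and the bag-of-words similarity measure are invented for the sketch.

```python
import math
from collections import Counter

def vectorize(text):
    # Represent a comment as a bag-of-words count vector
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def toxicity_score(comment, labeled_toxic):
    # Score a comment by its closest match among human-labelled
    # toxic examples; 0.0 means no resemblance, 1.0 an exact match
    vec = vectorize(comment)
    return max(cosine(vec, vectorize(t)) for t in labeled_toxic)

# Invented stand-ins for the human-reviewed training comments
toxic_examples = [
    "you are an idiot",
    "get out of here you fool",
]

print(toxicity_score("you are such an idiot", toxic_examples))
print(toxicity_score("great article thanks for sharing", toxic_examples))
```

Adding newly labelled comments to `toxic_examples` improves future scores, which mirrors the feedback loop the quote describes, though the real system trains a statistical model rather than keeping raw examples.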