Instagram turns to AI to tackle 'toxic' online abuse
Instagram is to employ AI to ensure it remains "a safe place" by filtering out abusive comments and spam.
The company said many users have reported that "toxic comments" discourage them from posting to the platform. In response, it has introduced two new tools to protect the community feel of the app, including a feature that identifies and blocks certain offensive comments.
An opt-in anti-abuse feature is being rolled out to Instagram users from today, allowing them to automatically hide comments around posts and live videos that may be offensive.
The tool is powered by machine learning. In a blog post, Instagram said it has been training its systems for "some time" to recognize certain types of comments, and that the software's capabilities will improve over time.
A second feature will scan comments for any "obvious spam" and block those it deems to be troublesome. It will remove spam written in English, Spanish, Portuguese, Arabic, French, German, Russian, Japanese and Chinese. The anti-abuse filter, meanwhile, is only available in English at the moment with plans to release updates in nine more languages.
The addition of an anti-harassment function follows on from a feature Instagram launched back in September last year which lets users prevent comments containing specific words from appearing under their images and videos.
At the time, the update drew a mixed reaction, with some commentators questioning whether handing the reins to users was the best way to combat online abuse. Questions were also raised about the potential for brands to use the tool to hide negative comments.
The latest feature is also likely to stir debate. In February, Twitter unveiled a slew of anti-troll features, including one similar to Instagram's: an algorithm that pushes potentially abusive and "low quality" tweets further down users' timelines. At the time, some accused Twitter of hiding abuse rather than tackling it at the root.
"We believe that using machine learning to build tools to safeguard self-expression is an important step in fostering more inclusive, kinder communities," said Kevin Systrom, chief executive and co-founder of the Facebook-owned social network. "Our work is far from finished and perfect, but I hope we’re helping you feel safer and more welcome on Instagram."