The Drum

Twitter to use algorithm to identify abusive accounts

In its latest blog post, an update on its safety work, Twitter announced that it has launched an effort to use algorithms to find and reduce abusive content.

Ed Ho (@mrdonut), vice-president of engineering at Twitter, wrote in a blog post that the company is working to make Twitter safer, including introducing updates that leverage its technology to root out abusive content, give users more tools to control their experiences and communicate with users more clearly about the actions it takes.

To help combat abusive content, which in the past has largely been reported to the social media site by users, Twitter is using algorithms to identify accounts that engage in abusive behavior, then taking action by limiting account functionality for a set time, including allowing only the account’s followers to see its tweets. Ho noted that the change could be enacted if an account is repeatedly tweeting “without solicitation at non-followers or engaging in patterns of abusive behavior that is in violation of Twitter Rules.”

While the blog post noted that people are free to share their viewpoints, Twitter will take further action against accounts that repeatedly violate the rules.

“We aim to only act on accounts when we’re confident, based on our algorithms, that their behavior is abusive. Since these tools are new we will sometimes make mistakes, but know that we are actively working to improve and iterate on them everyday,” wrote Ho.

This continues efforts announced last month to make the social platform safer, including updating how users can report abusive tweets, stopping the creation of abusive accounts and implementing safer search results, among others — though the company’s first anti-abuse tool appeared to encourage rather than curtail abuse.

Twitter is also introducing new filtering options for notifications to give users more control over what they see from certain accounts, including those without a profile picture – the amorphous egg accounts – or those with unverified email addresses and phone numbers.

In addition, the company is expanding the mute feature, letting users remove certain keywords, phrases and entire conversations from notifications, something users had been requesting. Twitter is also improving transparency and openness in reporting, with Ho stating that users will be notified when they’ve submitted reports and informed if the company takes further action.

Ho noted that the company has made mistakes in the past but asked for patience, support and continued feedback as it implements the changes.