Twitter rolls out new anti-abuse features and promises more in the coming weeks


By Rebecca Stewart, Trends Editor

February 7, 2017 | 3 min read

Twitter has announced three new tools designed to combat abuse within its walls, including a commitment to stopping trolls from creating accounts on the platform in the first place.

Announcing the news via a blog post, Twitter vice-president of engineering Ed Ho said that making Twitter a safer place was "a primary focus".

As such, the company will now stop the creation of new abusive accounts, present users with "safer" search results, and push potentially abusive or low-quality tweets further down users' timelines.

The last point means that only the most relevant conversations will be initially presented to users. Posts containing "low-quality" or abusive content will still be accessible to those who seek them out. The move follows on from the introduction of Twitter's so-called troll filter last year, which gave users more power to "filter lower-quality content, like duplicate Tweets or content that appears to be automated."

'Safe Search', meanwhile, will remove tweets that contain potentially sensitive content, or that come from blocked or muted accounts, from search results. Again, such content will still be discoverable if users go looking for it, but it will no longer clutter searches.

Finally, and perhaps most importantly, Twitter is moving to identify people who have been permanently suspended from the platform in order to prevent them from springing up again via a new handle.

"This focuses more effectively on some of the most prevalent and damaging forms of behavior, particularly accounts that are created only to abuse and harass others," said Ho, promising that further updates would be rolled out in the coming weeks.

Twitter's clampdown comes as social media companies come under pressure from the UK government to curb online harassment. Suggested legislation under a new 'Malicious Communications (Social Media) bill' could soon see the formation of a register of social media firms operating in the UK, regulated by Ofcom.

If successful, the House of Commons proposal means social platforms could face fines of up to £2m, or 5% of global turnover, for failing to filter abusive material.

Twitter is not the only social network looking to combat abuse: Facebook has introduced a new online safety centre and a bullying prevention hub over the past six months.

The Times has reported this week that Facebook is among a roster of companies considering the inclusion of a police icon that users could click on if they felt threatened; however, the company declined to comment when approached by The Drum.
