
Microsoft takes racist chatbot Tay offline to 'make adjustments' following Twitter trolling

Microsoft is scrambling to limit the PR damage after its artificial intelligence bot, Tay, posted a string of deeply offensive tweets. The chatbot, designed to help the tech giant engage with 18- to 24-year-olds online, was modelled to speak like a "teen girl" but instead used its platform to praise Hitler and Donald Trump and share 9/11 conspiracy theories.

Tay fell foul of trolls, and thanks to its conversational learning abilities – which allow it to mimic speech patterns – was taught to say a plethora of unsavoury statements by Twitter users.

The machine has now been taken offline by Microsoft, with the company saying in a statement: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay."

Tay has been silent for the past 24 hours while Microsoft deletes some of her worst status updates, a number of which used threatening or racist language.

The bot also ended up targeting individuals, singling out games designer and anti-harassment campaigner Zoe Quinn, who had herself been trolled online during Gamergate in 2014.

“Wow it only took them hours to ruin this bot for me," Quinn tweeted.

“This is the problem with content-neutral algorithms,” she added. “It’s 2016. If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed."

“It’s not you paying for your failure. It’s people who already have enough shit to deal with," she concluded.

Rebecca Stewart

Rebecca Stewart is The Drum's breaking news and social media reporter, covering how brands and media companies are using platforms like Snapchat, Facebook, Twitter and beyond.
