Microsoft unveiled Twitter artificial intelligence bot @TayandYou yesterday in a bid to connect with millennials and "experiment" with conversational understanding.
Billed as 'AI fam from the internet that's got zero chill!', Tay was meant to engage with her peers and help the tech giant explore its cognitive learning abilities through "playful conversation".
"The more you chat with Tay, the smarter she gets," said Microsoft.
Things started off fairly innocently:
The stunt, however, took an unexpected turn when Tay's verified Twitter account began issuing a series of inflammatory statements after being targeted by Twitter trolls. The conversational learning curve saw the bot tweet posts mentioning Hitler, 9/11 and feminism, some of which (including the below) have now been deleted.
However, many of her offensive tweets remain undeleted, including one in which she says Donald Trump "gets the job done."
Microsoft noted in its privacy statement for the project that Tay uses a combination of AI and editorial written by staff, including comedians, to generate responses, alongside relevant publicly available data that has been anonymised and filtered.
Things appear to have gone wrong for Tay because the bot was repeating fellow Twitter users' inflammatory statements, and Microsoft seems not to have considered the impact trolls could have on the experiment before it launched – The Drum has reached out to the company for comment on this process. Many users pointed out that the ease with which Tay was manipulated revealed the pitfalls of machine learning.
The bot retreated from Twitter at 4.20am GMT this morning, saying it "needed sleep".
Microsoft is not the only brand to have a campaign hijacked this week: on Sunday, a public initiative to name a new RRS ship took an unexpected turn when online voters placed 'Boaty McBoatface' as the lead contender.