Microsoft’s WeChat-based AI bot Xiaobing isn’t racist like Tay

By Charlotte McEleny | Asia Editor

April 4, 2016 | 2 min read

Last week Microsoft had to take down and make adjustments to its artificially intelligent bot on Twitter, called Tay, after people taught it how to be racist, sexist and xenophobic.

Online news website Tech in Asia has replicated the stunt with Xiaobing, the Chinese-language equivalent that lives on WeChat, and found it to be less impressionable.

Both Tay and Xiaobing were built to replicate a young teenage girl on social media, in a bid to prove that AI can have a tone of voice that is the opposite of what you would expect from a bot. However, flaws in Tay were exposed last week when people tweeted harmful views at her, which she then proceeded to tweet out in a full Twitter meltdown.

Charlie Custer, an editor at Tech in Asia, took to WeChat to replicate some of the things said to Tay on Twitter. While some of the bot's answers led Custer to question its sophistication, the overall conclusion of the experiment was that Xiaobing was more resilient to the jibes.

Xiaobing in fact came close to concluding that Custer himself was a bad person; of course, all of his prompts were taken from the original incident and did not reflect his own views whatsoever.

Part of the reason for this could be that Xiaobing had already been penalised by WeChat over privacy and foul language concerns. The 'girlfriend' bot had been banned in 2014, only to make its return after Microsoft worked closely with WeChat to ensure it was safer for other users.
