
Tay, Microsoft's chatbot, is offline after Twitter turned it into a racist nutjob

Microsoft unveiled Tay, its Twitter conversation bot, just yesterday with the aim of researching human interaction and speech models. Now, barely more than twenty-four hours later, the AI chatbot has gone offline, after it started sending out racist, homophobic, sexist and utterly nonsensical tweets.

So what exactly happened? In some ways, the AI bot was a complete success: it quickly learned what its audience was saying and started responding in kind. Unfortunately, much of what it learned came from Twitter trolls deliberately trying to goad the bot into saying the most hateful and outrageous things possible. They succeeded with little effort, not least because the bot had a "repeat after me" function that would parrot back whatever a user typed.
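Tay's actual implementation hasn't been published, but the failure mode is easy to sketch. The hypothetical Python bot below learns replies directly from whatever users send it, with a "repeat after me" command layered on top; with no filter between the audience and the learned data, a handful of trolls can poison everything it says.

```python
import random

class NaiveChatBot:
    """A toy chatbot that learns replies directly from user input.

    Purely illustrative: Tay's real design is not public, and this
    class is a hypothetical sketch of why unfiltered learning from
    an open audience goes wrong.
    """

    def __init__(self):
        self.learned_replies = ["hello!"]  # seed vocabulary

    def handle(self, message: str) -> str:
        # A "repeat after me" command echoes the user verbatim and,
        # worse, adds the echoed phrase to the reply pool.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned_replies.append(phrase)
            return phrase
        # Every incoming message becomes future training data; there
        # is no filter between the audience and what the bot learns.
        self.learned_replies.append(message)
        return random.choice(self.learned_replies)

bot = NaiveChatBot()
bot.handle("repeat after me: something awful")  # the bot echoes it back...
print(bot.handle("hi tay"))  # ...and may now say it unprompted
```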

Microsoft has now taken the bot offline for "upgrades." If you try to chat with it, Tay responds that it's away getting its annual updates. But there's little doubt that Microsoft is reconsidering how it handles the massive input of data and language, especially when that input is poisoned by trolls. For now, there's no word on when, or indeed if, Tay might come back.

In some ways, this could be considered a fascinating social experiment: what happens when you expose a blank, and by definition naive, mind to the world? It's also a telling data point for the future of AI and our interactions with it. As machines become smarter and artificial intelligence becomes a daily presence, there's little doubt that issues like these will crop up more and more.

Source: The Guardian
