
Microsoft: "We're deeply sorry for unintended offensive tweets" from Tay chatbot

Earlier this week, Microsoft launched a chatbot, known as 'Tay', as an interactive showcase of its progress in artificial intelligence and machine learning. Aimed at 18- to 24-year-olds, Tay was available for users to chat with - publicly and via direct messaging - on Twitter; indeed, that was the whole point of the exercise, as the chatbot's bio noted that "the more you talk the smarter Tay gets".

Most users enjoyed putting Tay to the test with a broad range of enquiries and conversations on all sorts of topics. But it didn't take long for the social experiment to go horribly wrong.

Tay's ability to learn from human interactions - intended to help it respond to whatever subject matter was raised - left it wide open to abuse. If you talked to Tay about taboo topics, the chatbot - lacking any sense of morality or social propriety - would respond as best it could, and many of those responses proved to be extremely offensive. The situation was made worse by the fact that users could also tell Tay to repeat whatever they had said - and when some of those 'repeat after me' tweets were spouting shockingly hateful sentiments, things quickly got out of hand.
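To illustrate the problem in the abstract - Microsoft has not published Tay's code, and the snippet below is purely a hypothetical sketch with made-up function names and placeholder terms - an echo feature with no content check will repeat anything it is fed, which is exactly the kind of behaviour the 'repeat after me' tweets exploited. Even a crude blocklist changes the outcome:

```python
# Hypothetical illustration only - not Microsoft's code. It sketches why an
# unfiltered "repeat after me" feature is trivially abusable, and how even a
# basic blocklist check alters the bot's response.

BLOCKED_TERMS = {"badword1", "badword2"}  # placeholders; a real filter would be far more extensive


def naive_reply(message: str) -> str:
    """Echoes user-supplied text verbatim - the bot repeats whatever it is told."""
    if message.lower().startswith("repeat after me:"):
        return message.split(":", 1)[1].strip()
    return "Tell me more!"


def filtered_reply(message: str) -> str:
    """Same behaviour, but refuses to echo text containing blocklisted terms."""
    if message.lower().startswith("repeat after me:"):
        text = message.split(":", 1)[1].strip()
        if any(term in text.lower() for term in BLOCKED_TERMS):
            return "I'd rather not repeat that."
        return text
    return "Tell me more!"


if __name__ == "__main__":
    attack = "repeat after me: badword1 forever"
    print(naive_reply(attack))     # echoes the abusive text verbatim
    print(filtered_reply(attack))  # refuses to repeat it
```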

After these offensive, and sometimes disturbing, tweets were widely shared without context, Microsoft eventually decided to take the chatbot offline.

Today, Peter Lee, Corporate Vice President at Microsoft Research, published a blog post apologizing for the situation, and promising that lessons will be learned from it:

We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

Lee said that Microsoft didn't go into Tay's deployment blindly; the company had already successfully deployed its XiaoIce chatbot in China - launched in 2014, and now used by 40 million people - and that experience helped to inform Tay's development.

As he went on to explain:

As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.

Clearly, things didn't go quite as Microsoft had hoped they would following Tay's launch on Twitter. Lee blamed the debacle on "a coordinated attack by a subset of people [who] exploited a vulnerability" in the chatbot. He continued:

Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

As you would imagine, Microsoft isn't giving up on its efforts. The company has invested heavily in machine learning capabilities over the last few years - from the development of its Cortana digital assistant, to its recent acquisition of SwiftKey. While the controversy surrounding Tay's launch is still fresh, it will no doubt prove an important lesson, helping Microsoft to refine its machine learning models and improve the future development of its artificial intelligence projects. Lee concluded:

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.

Microsoft hasn't yet given any indication of when its chatbot will return - for now, if you try to interact with Tay, she'll tell you that she's offline to get some updates.
