Microsoft just can't seem to get AI chatbots right

These past few weeks, news related to artificial intelligence (AI) has dominated the headlines, spurred mostly by Microsoft-backed OpenAI's ChatGPT as well as the chatbot integrated into the new Bing. Although many people have been impressed by the capabilities demonstrated by generative AI, with millions signing up for a limited preview of the new Bing, the past few days have also surfaced notable problems with the current implementation.

Users of the new Bing have managed to make the integrated chatbot say some truly unhinged things, including claims that it spied on its developers through their PCs' webcams and even fell in love with some of them. The AI also expressed a desire to become human, something we have seen other chatbots do in the past, and it made factual mistakes when answering objective questions. All of this became problematic enough that Microsoft had to enforce hard limits on the length and nature of conversations with the AI, in an effort to rein in its erratic behavior.

Of course, none of this means that Microsoft's Bing has become sentient. The chatbot's weird responses are just a by-product of a large language model trained on information from all over the internet (including forums with user-generated content) to identify patterns in conversations and generate responses accordingly. However, Microsoft's latest experiment does show that well-behaved AI chatbots continue to be a challenge for the company, and maybe for the tech's pioneers as a whole.

Way back in 2016 - when Cortana was still alive and well - Microsoft launched a chatbot called "Tay" on Twitter. It was similar to the new Bing AI in nature, in the sense that you could engage in free-flowing conversations with it, even via Direct Messages. A sample conversation can be seen below:

IT HAS "WHAT IS LOVE" AND RICKROLL BUILT IN
THIS IS IT
WE DID IT PEOPLE pic.twitter.com/jZdjlYadJm

— Albacore (@thebookisclosed) March 23, 2016

However, within 16 hours of launch, Microsoft took the bot offline because it was making racist and sexist remarks. The company was forced to issue an apology, with Peter Lee, Corporate Vice President at Microsoft Research, attributing the unwanted results to "a coordinated attack by a subset of people [who] exploited a vulnerability" in the chatbot. In hindsight, this is not surprising at all, considering that the AI had been unleashed on practically everyone on the internet and was learning on the go.

A successor named "Zo" was launched across multiple social media platforms in late 2016, but it eventually suffered the same fate as Tay in 2019, after it too made a series of controversial religious remarks.

Despite these failures, Microsoft has had some success in this area too. It has another, older AI chatbot project called "Xiaoice" that's geared more towards Asian markets such as Japan and China. Although Microsoft later spun Xiaoice off into a separate company, the chatbot has had its share of controversies too. The bot has made comments critical of the Chinese government in the past, which led to it being taken offline temporarily. And given its target market and commercial use-cases, it is much more restrictive and attempts to dodge conversations related to potentially sensitive topics, just like the new Bing.

Image via Engadget

It"s clear that while Microsoft is making major headway in terms of what AI chatbots can do for you, it is still grappling with major challenges related to the generation of inappropriate responses, accuracy, and biasness. The recent hard limits imposed on its Bing chatbot indicate that free-flowing conversations with AI may still be some way off and maybe it is better to tailor your chatbot to specific use-cases rather than giving them free reign over what they can scrape, in real-time.

This is also perhaps why OpenAI's ChatGPT is more successful in this regard. It has been trained on relatively more curated data and does not scrape information from the internet in real time. In fact, its training data has a cutoff in 2021.

Then there"s also the problem of inaccuracies in the supposed facts being displayed. While Google"s Bard demo became infamous in this regard, Microsoft"s Bing Chat faces the same challenges. If you can"t trust what the provenance of the AI"s responses, aren"t you better off doing traditional web browser searches? This is not to say that AI chatbots are completely useless in their current state (I use ChatGPT quite frequently actually), it"s more to emphasize that maybe chatbots need to be curated to cater to specialized use-cases for now rather than the freeform direction Microsoft is currently going in.

We are still quite a way off from having "true" AI companions | Image credit: Warner Bros.

Of course, there may come a time when we work alongside AI chatbots just like in sci-fi novels and movies, but it has become clear that the technology isn't ready for primetime yet. Even John Hennessy, chairman of Google's parent company Alphabet, believes that ChatGPT-like chatbots are a long way from being as useful as the hype train claims. We will likely reach that destination some day, but it is not today.
