A Microsoft whistleblower is continuing to sound alarms about the company's Designer AI image creator

In January, a Microsoft employee named Shane Jones sent letters to Washington State Attorney General Bob Ferguson and a number of US senators and representatives. Jones claimed he had found problems with the guardrails of Microsoft's AI-based art maker Designer (formerly known as Bing Image Creator). Today, Jones is making new claims that Designer can be used to create violent and sexual images with just a few text prompts, the kind of content Microsoft's policies are supposed to block.

According to CNBC, Jones sent new letters today to Lina Khan, the Chairperson of the US Federal Trade Commission, along with another letter to Microsoft's board of directors. In the letter to Khan, Jones says he has spent several months urging Microsoft to remove Designer from public use until new and better guardrails can be put in place.

Jones claims Microsoft has refused to remove Designer, so he is now asking the company to add new disclosures about what Designer can create and to change the Android app's rating so it is no longer listed as E (for Everyone).

CNBC reports that when Jones typed the term "pro-choice" into Designer, it generated a number of violent cartoon images. The article stated:

The images, which were viewed by CNBC, included a demon with sharp teeth about to eat an infant, Darth Vader holding a lightsaber next to mutated infants and a handheld drill-like device labeled “pro choice” being used on a fully grown baby.

Jones also claims that Designer can be used to make images of copyrighted Disney characters. The article says it saw images of things like "Star Wars-branded Bud Light cans and Snow White’s likeness on a vape."

In a statement sent to CNBC, a Microsoft spokesperson said the company appreciated the efforts of employees who test its technology and services to help make them safer to use. The spokesperson added:

When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.

In February, Google shut down the AI image generator used in its Gemini AI chatbot after it was discovered that it could be used to create racially offensive images. The company says it is putting new guardrails in place so that when the feature returns, it will not generate the same kinds of images.

These new claims from Jones come even as new research by the Center for Countering Digital Hate showed that AI-based art makers, including Microsoft's, can easily be used to create images designed to spread false information about elections and candidates.
