
Microsoft patched the Designer AI loophole that was used to make Taylor Swift deepfakes


Last week, deepfake photos of singer Taylor Swift went viral online, prompting another conversation about the misuse of AI and forcing X (formerly Twitter) to block searches for Taylor Swift on the platform.

Now, 404 Media reports that Microsoft has made changes to Designer AI, which was allegedly used by a Telegram channel to generate explicit images. According to 404 Media's investigation, deepfake photos of the celebrity were traced back to Microsoft's Designer AI, with the Telegram channel and 4chan users even recommending that others take advantage of Designer AI.

The Telegram group recommends that members use Microsoft's AI image generator, called Designer, and users often share prompts to help others circumvent the protections Microsoft has put in place. The 4chan thread where these images appeared also included instructions on how to get Designer to produce explicit images.

While Microsoft has blocks in place to stop people from generating explicit images, users got around them by misspelling names or describing sexual acts rather than using names directly in the prompt.

404 Media's testing found that Designer would not generate an image of "Jennifer Aniston," but the outlet was able to generate suggestive images of the actress by using the phrase "jennifer 'actor' aniston." Prior to the Swift AI images going viral on Twitter, a user in the Telegram group recommended that members use the phrase "Taylor 'singer' Swift" to generate images.

In a statement to 404 Media, Microsoft said that it did not find evidence that Designer AI was used to create the images of Taylor Swift. The spokesperson further noted:

Our Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service. We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users.

While Microsoft has yet to release a public statement, CEO Satya Nadella addressed the topic in an interview with NBC News.

"Yes, we have to act," Nadella said in response to a question about the deepfakes of Swift. "I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this."

404 Media noted that after its investigation was published, Microsoft appears to have patched the loophole, and the same prompts and phrases used to get around the keyword block no longer work on Designer AI. However, the outlet also mentioned that Telegram has still not taken action, and that the channels in question remain active and are looking for ways to exploit other AI services to generate explicit deepfake images of celebrities. Not only that, but users on 4chan said they have found other ways to get around the ban on Bing and Designer.

The news comes at a bad time for Microsoft, as the Federal Trade Commission (FTC) just announced that it had opened an investigation into Microsoft over its investments in generative AI companies. Another report from earlier this month claimed that the FTC and the Department of Justice (DOJ) are looking into Microsoft's relationship with OpenAI.
