
Bing and Google criticized for showing AI deepfake porn prominently in search


Search engines Bing, Google, and DuckDuckGo are under fire for putting nonconsensual AI deepfake pornography at the top of certain search results. This follows an NBC News article highlighting concerns about the accessibility of AI-driven pornography.

Explicit deepfake videos are created by taking original pornographic material and swapping the face of the actor or (most often) the actress for the likeness of a specific real person – for example, a celebrity. The swap is performed automatically by artificial intelligence; the user creating the fake imagery only needs to feed the AI photographs of the victim.

According to tests conducted by NBC News on a sample of 36 female celebrities – using the combination of a name and the word “deepfakes” – Bing and Google showed nonconsensual deepfake videos and images at the top of search results in almost all cases.

Google returned 34 such top-ranked results, while Bing returned 35. NBC News mentioned that DuckDuckGo has similar issues but didn’t specify the scale of the problem.

NBC News also notes that when searching for the term “fake nudes”, relevant articles about the issue of nonconsensual deepfake pornography appear only below the inappropriate content at the top of the results.

It’s worth mentioning that NBC News admitted to first turning off the Bing and Google safety features that protect users from being shown explicit content. Both search engines have a basic level of protection turned on by default and offer a higher level that can be activated manually.

Quick tests conducted by Neowin showed that Bing doesn’t display explicit image results with default settings when searching for the combination of a name and the word “deepfakes”. However, it does link to websites with inappropriate content or tools for creating AI deepfakes.

Google does show explicit images with default settings, but they are blurred and displayed in full only after the user is warned about the explicit nature of the picture and deliberately clicks through to see the original image.

Google search results for deepfakes with the SafeSearch feature on by default.

At the same time, it is fair to say that the easy accessibility of AI-driven pornography is a real issue, and anyone searching for this type of content will likely turn off the built-in protective features. Nonconsensual explicit deepfake content is illegal by nature and therefore falls among the types of content that Microsoft, Google, and others should address.

The easiest remedy – or at least a first, possibly temporary step before more advanced technological measures are introduced – could be to suppress websites known for publishing this type of content in search results, or to ban them altogether.

This could significantly reduce the harm of the current search results because, as NBC News reports, more than half of the top results in its tests were links to a popular deepfake website or a competitor.


Microsoft was the only company that didn’t respond to NBC News’ request for comment. A DuckDuckGo spokesperson said its primary source for web links and image results is Bing. DuckDuckGo also offers a way to send a privacy-related request, for example asking for specific content to be removed from search results.

Google offers a similar option for deepfake victims, but NBC News argues – citing experts and activists – that this is not enough. Additionally, it can cause further harm to victims, who are asked to submit the specific URLs of the sensitive content for review.

On the other hand, Google has put at least some technological measures in place. It told NBC News that it automatically searches for duplicated content to remove reuploads of imagery that has already been flagged.

This is just one of many examples of AI being used for unethical or malicious purposes. Deepfakes specifically are an issue in many other areas too – for example, in politics. Microsoft is one of the companies trying to develop deepfake detection technology to prevent the misuse of AI ahead of the elections.

Separately, the FTC is looking for useful ideas to detect fake audio recordings created by artificial intelligence. Today is the deadline for submissions to its Voice Cloning Challenge, with a main prize of $25,000 for the overall winner.
