Google explains why AI Overviews wants you to glue cheese to your pizza

Google AI Overviews

Google announced AI Overviews for Search earlier this month at its I/O 2024 developer conference. When a user types a complex query into Google Search, the feature displays a summary at the top of the search results, pulling information from multiple online sources.

However, AI Overviews sparked criticism after people started reporting that the feature was going haywire, suggesting bizarre and sometimes harmful answers. To the surprise of many, AI Overviews suggested that users should add glue to get cheese to stick to pizza and drink urine to pass kidney stones.

In a blog post, Google explained how the AI-powered feature works and the actions it took after user reports started coming in. The company said AI Overviews "simply doesn't generate an output based on training data" and works differently from AI-powered chatbots and other LLM products.

Because of the way it works, AI Overviews doesn't make things up on its own or "hallucinate." However, AI Overviews can mess things up by "misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available," Google said.

While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index.

That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further. Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.

Google notes that its other Search features have these problems too. The company claims that the accuracy rate for AI Overviews is "on par" with featured snippets, which are also powered by AI.

Addressing the odd results reported around the web, Google claimed that not all of them were genuine and that "there have been a large number of faked screenshots shared widely."

Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared. So we’d encourage anyone encountering these screenshots to do a search themselves to check.

However, Google admitted that AI Overviews "featured sarcastic or troll-y content" by pulling information from discussion forums. "Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza," Google said.

The company said it had made more than a dozen technical improvements to its systems and identified patterns where the feature didn't get things right. Among various changes, Google has updated its systems to limit the use of user-generated content in responses that could offer misleading advice.

It has also built mechanisms to limit the inclusion of satire and humor content in answers and to identify "nonsensical queries that shouldn't show an AI Overview," and it added triggering restrictions for queries where AI Overviews aren't helpful.

With that said, while you can't disable AI Overviews completely, there is a way to remove them from your search results. On a search results page, click the More button below the search bar and select the "Web" filter from the drop-down menu. Note that the filter is removed if you close the browser tab or return to the Google home page.
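For those who want to skip the menu clicks, the "Web" filter has been widely observed to correspond to a `udm=14` parameter in the search URL. This is unofficial, undocumented behavior that Google could change at any time, but a minimal sketch of building such a URL looks like this:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL with the "Web" filter applied.

    The udm=14 parameter mirrors what the "Web" filter sets in the
    address bar -- an observed, unofficial behavior, not a documented
    Google API, so it may stop working without notice.
    """
    params = urlencode({"q": query, "udm": 14})
    return f"https://www.google.com/search?{params}"

print(web_only_search_url("glue on pizza"))
# e.g. https://www.google.com/search?q=glue+on+pizza&udm=14
```

Bookmarking a URL in this form (or setting it as a custom search engine in your browser) makes the filter stick across sessions, unlike the drop-down menu.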

Image via Pixabay
