Google's new AI Overviews in Search are already generating major factual errors [Update]

Update - We received this statement from a Google spokesperson:

The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback. We're taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.

Original story - Earlier this month, Google held its annual I/O developers conference. As expected, AI was at the top of the company's agenda during the event as it revealed new and upcoming generative AI features for a number of its products and services.

One of the features announced at I/O was AI Overviews for Search, which began rolling out to US users soon after the event. The idea is that when you type in a search query, an AI program generates an answer to your question based on information from a number of sources on the web, with credits to those sources.

However, as CNBC reports, since AI Overviews became part of Google Search in the US, many people have posted obvious factual errors that the feature has generated.

One person, "@heavenrend" on X, showed that when they typed "cheese not sticking to pizza" into Google Search, the AI Overview summary suggested adding "about 1/8 cup of nontoxic glue to the sauce."

Another X user, "@napalmtrees," asked whether it is safe to leave a dog in a hot car. The AI Overview summary from Google Search said simply, "Yes, it’s always safe to leave a dog in a hot car." The apparent source of this flat-out wrong answer was the lyrics of a fictional Beatles song.

The criticism of AI Overviews echoes the reception of Google's first AI chatbot, Bard, which launched in March 2023 and was criticized for shipping without enough safety and ethical guardrails. In February 2024, Google disabled the generation of images of people in Bard's successor, Gemini, after the feature produced artwork that sometimes depicted people with darker skin color in inappropriate ways. At the time, Google said it would improve the feature before restoring it to Gemini, but so far, that has yet to happen.
