Why are these FAKE phrases fooling Google’s AI Overviews?

Has somebody ever said a phrase to you in conversation that you’ve never heard before and that makes zero immediate sense, but you nod along anyway and pretend you know what they’re talking about?

If so, you have something in common with Google’s AI Overview search engine feature!

Of course, there is one small difference between you and Google: it’s not your job to provide a full explanation of the saying’s meaning, even if the phrase is completely made up.

Apparently, that is AI Overview’s responsibility, and it carries it out with remarkable confidence whenever it detects a series of words that could be construed as a phrase people actually say.

For example, it was discovered that when asked to explain the non-existent saying “You can’t lick a badger twice”, Google’s AI Overview provided an entire backstory for a phrase with no recorded existence in society.

Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine

Greg Jenner (@gregjenner.bsky.social) 2025-04-23T10:15:15.706Z

Seizing the opportunity to trick Google’s AI, search users successfully managed to receive explanations for phrases such as:

– “Making the milk sizzle”
– “Stringing beans for pleasant means”
– “You can take your dog to the beach but you can’t sail it to Switzerland”
– “It’s better to have a tentacle in the tent than a rat in the rattan chair”

Can we trust what AI tells us?

Despite the obvious entertainment value in a revelation like this, we are left to wonder just how reliable artificial intelligence (AI) chatbots are when we turn to them for answers to our questions.

Most popular AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT, openly state that their answers will not be 100% accurate at all times. However, when answers are delivered in such a matter-of-fact manner, as the AI Overviews for the above fake phrases were, it raises genuine concerns that AI is, for lack of a better term, programmed to tell us what we want to hear on occasion, rather than simply what is correct.

With generative AI still in its developmental phase, it will be interesting to see if and how AI leaders such as Google and OpenAI navigate the potential for misinformation that their products are currently suffering from.

Here at Engage Web, we stay on top of the latest developments in search engines, including the ever-growing influence of artificial intelligence. If you want your website to get the best results on platforms like Google, just reach out to our team now to get started.

Luke Meredith
