Jon Goodey, an SEO consultant and AI newsletter creator, deliberately published a LinkedIn article containing an AI hallucination: a fabricated story about a nonexistent "Google March 2026 Core Update". By his own account, reported by Search Engine Journal on March 18, 2026, the article then ranked on the first page of Google for the query "Google March update 2026". Worse: Google's AI Overviews picked up the misinformation and presented it as established fact.
The experiment wasn't malicious — Goodey wanted to test how false information spreads through search in the AI era. The results are damning for Google.
What Happened Step by Step
Goodey runs an AI-assisted content creation workflow that includes a human quality control layer. During a newsletter drafting session, his AI hallucinated: it fabricated details about a Google algorithm update that never happened. Instead of correcting it, Goodey chose to publish the LinkedIn article deliberately to observe what would unfold.
The outcome was unambiguous:
- The LinkedIn article ranked on the first page of Google for "Google March update 2026", not buried on page three but near the top of the results
- Google AI Overviews picked up the fake information and presented it as factual to anyone asking the question
- Several independent SEOs republished the false information without fact-checking, amplifying its spread
The Systemic Problem
This test exposes a structural flaw in how Google Search works in 2026. The ranking system scores content on relevance, popularity, and domain-authority signals, not on factual accuracy. LinkedIn carries strong domain authority, and the article was optimized, even unintentionally, for the target query. The result: fabricated information outranked factual analysis from lesser-known sites.
AI Overviews amplify the problem. Google's AI systems synthesize available sources without validating them. If the top page-1 sources say a "Google March 2026 update" exists, the AI Overview will repeat it — lending the error additional apparent authority.
What Businesses Should Take Away
This experiment has two concrete implications for your content strategy:
- Don't trust Google searches for SEO information — The SEO market is particularly vulnerable because nobody can directly "test" whether a Google update actually happened. Always verify on the official Search Central Blog before acting on SEO news found via search. Strengthen your E-E-A-T signals so your content is identifiable as trustworthy.
- AI in your content workflow requires human oversight — AI writing tools hallucinate regularly, especially on recent events. A workflow without human verification is a misinformation factory.
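One lightweight way to enforce that human layer is a pre-publish gate that flags AI-drafted sentences asserting recent, hard-to-verify facts. The sketch below is purely illustrative: the function name, the pattern list, and the heuristics are assumptions, not part of Goodey's actual workflow, and a real pipeline would tune the patterns to its own domain.

```python
import re

# Illustrative patterns for claims a human should verify before publishing.
# These are assumed heuristics, not a definitive or complete list.
RISKY_PATTERNS = [
    r"\b(19|20)\d{2}\b",   # explicit years (recent-event claims)
    r"\bcore update\b",    # alleged Google algorithm updates
    r"\bannounced\b",      # attributed announcements
    r"\baccording to\b",   # sourced claims worth checking
]

def flag_for_review(draft: str) -> list[str]:
    """Return the sentences of a draft that match a risky pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [
        s for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in RISKY_PATTERNS)
    ]

draft = ("Google confirmed the March 2026 core update. "
         "Write for readers first.")
print(flag_for_review(draft))
# only the first sentence is flagged for human verification
```

A gate like this does not verify anything by itself; it only guarantees that factual-sounding claims reach a human reviewer instead of going straight to publication.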
The real lesson: this isn't about Google being "bad." The ranking system was never designed to verify truth, only relevance and popularity. In a world where AI generates content at scale, that limitation becomes critical, and factual, sourced, expert-signed content becomes a powerful competitive differentiator. E-E-A-T isn't just a guideline; it's your moat.
Cicero's Take
This is the strongest argument yet for investing in genuine expert content. When AI misinformation floods search results, sources that demonstrate real E-E-A-T signals stand out. This isn't just ethical — it's commercial strategy. Pair that with a solid SEO content strategy and you build a moat competitors can't easily replicate.
A growth and SEO content strategy specialist, I founded Cicéro to help businesses build lasting organic visibility, both on Google and in AI-generated answers. Every piece of content we produce is designed to convert, not just to exist.