Fabricated Google core update ranked in Search and AI Overviews

Summary

An SEO practitioner published a fabricated Google core update on LinkedIn to test how misinformation spreads. The post ranked on page one of Google Search and was cited by AI Overviews as fact.

This shows Google Search has no fact-checking layer for most queries, and AI Overviews can launder false claims into authoritative-sounding summaries that practitioners may trust without verification.

Check the Google Search Status Dashboard for confirmed updates only, audit your AI content workflows for accuracy before publishing, and treat AI Overview answers on SEO topics with skepticism.

What happened

SEO practitioner Jon Goodey published a fabricated “March 2026 Google Core Update” in a LinkedIn newsletter to test how easily misinformation spreads through Google Search. Search Engine Journal reported that the fake update ranked on page one for “Google March update 2026” and was picked up by AI Overviews as fact.

Goodey explained in a subsequent LinkedIn post that his AI-assisted newsletter workflow caught a hallucination about a nonexistent core update. Instead of correcting it, he published it deliberately to see whether anyone would challenge the claim.

Nobody did. Multiple independent SEO sites published detailed articles treating the update as confirmed. According to SEJ’s coverage, these weren’t thin posts. They included invented technical details like “Gemini 4.0 Semantic Filters,” an “Information Gain” metric, and recovery strategies for an update that never happened. A technology site called TechBytes published a piece headlined “Google March 2026 Core Update: Cracking Down on ‘Agentic Slop’” with fabricated specifics about a “Zero Information Gain” classification system and a “Discover 2.0 Engine.”

Major search marketing publications, including SEJ, did not report the fake update as real.

A real March 2026 core update did eventually roll out on March 27, according to the Google Search Status Dashboard. Goodey’s experiment predated it.

Why it matters

Google’s search results have no meaningful fact-checking layer for most queries. The experiment shows that a single LinkedIn article containing AI-generated misinformation can reach page one and feed directly into AI Overviews, where it gets presented without qualification.

For SEO practitioners, the implications cut two ways. First, the SEO information you find through Google Search may itself be fabricated. SEJ’s Roger Montti compared searching for SEO information on Google to “playing a slot machine.” Second, if you publish AI-assisted content without human review, you risk becoming part of the misinformation chain.

The AI Overviews angle is particularly concerning. Classic search results at least show source URLs that a reader can evaluate. AI Overviews synthesize claims into authoritative-sounding summaries. When the underlying source is fabricated, the AI Overview launders the misinformation into something that looks like consensus.

Google has explicitly declined to integrate fact-checking into its search results. SEJ’s article references an Axios report in which Google’s global affairs president Kent Walker told the European Commission that fact-checking integration “simply isn’t appropriate or effective for our services.”

Google does support ClaimReview structured data that lets fact-checkers annotate claims. However, Google is phasing out support for ClaimReview markup in Search results, according to its own documentation. The markup remains supported only in the Fact Check Explorer tool. The direction of travel is away from structured fact-checking in SERPs, not toward it.
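For context, ClaimReview is JSON-LD markup embedded in a fact-check page. A minimal example is sketched below; the field names come from the schema.org ClaimReview type, while the URL, names, and dates are illustrative placeholders, not real fact-check data:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.com/fact-check/march-2026-core-update",
  "claimReviewed": "Google rolled out a March 2026 core update with 'Gemini 4.0 Semantic Filters'",
  "itemReviewed": {
    "@type": "Claim",
    "datePublished": "2026-03-01"
  },
  "author": {
    "@type": "Organization",
    "name": "Example Fact Checker"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "worstRating": "1",
    "alternateName": "False"
  }
}
```

Given the deprecation in Search results, publishing this markup now mainly surfaces the review in the Fact Check Explorer tool rather than in SERPs.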

What to do

Verify update claims against primary sources. The Google Search Status Dashboard lists every confirmed algorithm update with dates and durations. If an update isn’t listed there, treat it as unconfirmed. Google’s official Search Central blog and social accounts are the only other primary sources.
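The dashboard check can be reduced to a simple lookup: an update claim is trustworthy only if it matches an entry on the Search Status Dashboard. The sketch below assumes you maintain the confirmed list by hand from the dashboard page; the function name and data shape are illustrative, not a real API:

```python
from datetime import date

# Confirmed updates, copied manually from the Google Search Status Dashboard.
# (An automated feed is not assumed here; keep this list in sync by hand.)
CONFIRMED_UPDATES = [
    {"name": "March 2026 core update", "start": date(2026, 3, 27)},
]

def is_confirmed(update_name: str) -> bool:
    """Return True only if the claimed update appears on the dashboard list."""
    wanted = update_name.strip().lower()
    return any(u["name"].lower() == wanted for u in CONFIRMED_UPDATES)

print(is_confirmed("March 2026 core update"))        # True: listed on the dashboard
print(is_confirmed("Agentic Slop crackdown update"))  # False: fabricated, not listed
```

Anything that returns False here stays in the "unconfirmed" bucket until Google's own channels say otherwise.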

Audit your AI content workflows. If you use AI to draft content, build a verification step that checks factual claims against primary sources before publishing. Goodey’s experiment worked precisely because other publishers skipped this step.
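One cheap way to build that verification step is to flag every sentence in a draft that asserts a Google update, so a human reviews it against primary sources before publication. The regex and workflow below are a minimal illustrative sketch, not a production fact-checking tool:

```python
import re

# Flag sentences that mention a Google update so a human can verify them
# against the Search Status Dashboard before the draft goes out.
UPDATE_PATTERN = re.compile(r"\b(core|spam|helpful content)\s+update\b",
                            re.IGNORECASE)

def flag_update_claims(draft: str) -> list[str]:
    """Return sentences mentioning a Google update for manual fact-checking."""
    sentences = re.split(r"(?<=[.!?])\s+", draft)
    return [s for s in sentences if UPDATE_PATTERN.search(s)]

draft = ("Google confirmed the March 2026 core update on March 27. "
         "Separately, our traffic dipped last week.")
for claim in flag_update_claims(draft):
    print("VERIFY:", claim)
```

A flagged sentence is not necessarily wrong; it just cannot ship until someone checks it, which is exactly the step the burned publishers skipped.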

Be skeptical of AI Overview answers for SEO queries. AI Overviews can surface and synthesize misinformation from a single low-authority source. Cross-reference any AI Overview claim about algorithm updates, ranking factors, or technical SEO against official Google documentation or established industry publications.

Don’t rush to publish update coverage. The sites that got burned were chasing traffic from a trending query. Waiting even a few hours to verify against the Search Status Dashboard would have prevented publishing fabricated content.

Watch out for

AI hallucinations about Google updates are plausible by default. Google releases updates frequently enough that a claim about a new one rarely triggers skepticism. Your AI writing tools have been trained on years of update coverage and can generate convincing fake details, complete with invented feature names and timelines.

LinkedIn content ranks surprisingly well for informational queries. Goodey’s article wasn’t on a high-authority SEO domain. It was a LinkedIn newsletter post. LinkedIn’s domain authority can push even low-effort content onto page one for queries without strong competition, which makes it an effective vector for misinformation.