📖 In This Issue

  • Featured Snippets: (News & Resources)

  • Cover Story: GEO = SEO is Correct. It’s Also a Trap

  • Operator of Interest: Cindy Krum

  • Learn This: Context Window

📰 Featured Snippets (News & Resources)

BuzzStream writes about how to transition from link building to digital PR, which can have a huge impact on LLM visibility.

Google’s AI Mode now opens links in a parallel side panel in Chrome. The side panel has been a Chrome feature for a while, but activating it without explicit user action is new.

A new report from Adobe says that AI traffic is growing across all industries. While I’m sure this is true, all of the first-hand data I have says it is still minuscule compared to other channels.

StatCounter data from March suggests that Gemini is now the second-biggest LLM referrer on the market. I wouldn’t be surprised if Gemini’s actual usage is a lot higher but underreported in most analytics tools.

GEO = SEO is Correct. It’s Also a Trap.

On paper, “Generative Engine Optimization” often looks like traditional SEO plus a layer of PR, repackaged for a new acronym cycle. If you’re an in-house SEO team, your first instinct is healthy: call out the snake oil. Most “GEO playbooks” quietly smuggle in a promise they can’t keep, which is certainty about a system that changes by the quarter.

But the people pushing GEO aren’t hallucinating a new universe either. LLMs are a new distribution surface. Answers are increasingly the product, not the ten blue links. And that changes the practical question from “How do we rank?” to “How do we show up in the answer, especially when the interface doesn’t need to show links?”

That tension is the whole story. You can be right that GEO overlaps heavily with SEO and still be exposed the moment the plumbing changes.

AI is a multiplier. If your strategy is thin, it scales thinness.

Why everyone is right (and what they miss)

SEOs are right about incentives

If your brand consistently earns visibility in search results, you are still buying the most reliable form of distribution on the internet. Even as products shift toward answer-first experiences, “being easy to crawl, easy to trust, and widely cited” remains the shortest path to being used as a source when a system does ground itself in the web.

That matters because many consumer AI experiences still lean on search results or hybrid workflows in one form or another. OpenAI’s own documentation for ChatGPT search emphasizes returning answers “with links to relevant web sources,” which is a very SEO-shaped incentive structure.

And the SEO camp is also right that new acronyms often sell confidence where none exists. A lot of “GEO tactics” are just good SEO fundamentals with a new coat of paint and a higher invoice.

GEOs are right about system behavior

LLMs don’t behave like a ranked list. They don’t “present results,” they synthesize an answer. Sometimes that answer is grounded. Sometimes it’s not. Sometimes it shows citations. Sometimes it doesn’t. And the “why” can change based on the model, the prompt, the product UI, and business decisions you don’t control.

Even within Google’s own answer-first surfaces, the amount of attribution can vary. Google has publicly acknowledged cases where AI Overviews weren’t showing links due to a bug, which is a small but revealing example of the broader point: even if the system has sources, the interface can choose to suppress them (or accidentally fail to display them).

And when the UI leans harder into “answer-first,” click behavior follows. Pew Research found that when an AI summary appeared in Google results, users clicked traditional search result links less often than when no AI summary appeared.

If your job is traffic, that’s not philosophical. That’s a distribution change.

What both miss

The shared blind spot is treating “LLMs rely on search” like a permanent law of nature. It’s not. It’s a product choice, a partnership choice, an access choice, and sometimes a legal choice.

If your plan only works when a model is actively retrieving from live SERPs, your plan is fragile by design. You’re optimizing for today’s plumbing, not tomorrow’s reality.

The trap: “GEO = SEO” breaks when grounding weakens

When people say “just do SEO,” they usually mean “keep earning visibility in search and the AI stuff will follow.” That can be true today. It can also fail fast. Here’s where the floor can drop out.

Risk 1: Access risk (the pipe can get shut off)

The assumption behind a lot of GEO measurement is: “LLMs will always be able to observe SERPs at scale.” That assumption is increasingly shaky.

Google has been tightening controls that make automated access to SERPs harder, and the industry has documented periods where SERP scraping became more brittle and required heavier JavaScript execution and more sophisticated anti-bot handling.

Then there’s the legal layer. In December 2025, Google filed suit against SerpApi, alleging unlawful circumvention of technical measures and massive-scale scraping of Google results. Reuters’ coverage captures the key point for in-house teams: the fight is not just technical; it’s legal and commercial, and it can reshape what data access looks like overnight.

Even if you never scraped anything yourself, the LLM ecosystem depends on this access existing. If the pipe narrows, your LLM “visibility” narrows with it.

Risk 2: Product risk (answer-first interfaces reduce link dependence)

Even if search grounding remains, the interface can decide how much it wants to reward the open web.

Google has been iterating on AI Mode and how it presents sources, including experiences where links open side-by-side rather than sending users away. That’s a reasonable UX improvement, but it’s also a reminder: the platform controls the flow. You don’t.

And if the interface reduces the need to click, your rankings can remain stable while your outcomes degrade. That’s not a failure of SEO craft. It’s a shift in how value is distributed.

Risk 3: Measurement risk (you can’t defend what you can’t observe)

Traditional SEO has mature instrumentation. You can argue about attribution models, but at least you have logs, crawls, rankings, impressions, clicks, and diagnostics you can put in front of leadership.

“LLM visibility,” on the other hand, is still a measurement swamp for most in-house teams. Answers vary by user, prompt, context, and model version. Citations can appear and disappear. Interfaces change.

That leads to a predictable failure mode: teams over-invest in tactics they can’t validate, because the story sounds good and nobody can disprove it cleanly.
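One practical antidote is to treat LLM visibility as a rate measured over many sampled answers rather than a single spot-check. The sketch below is a hedged illustration of that idea, not a real integration: `get_answer()` is a hypothetical stand-in for however your team samples answers (an API call, a browser harness, a vendor tool), mocked here with random citation sets so the script runs on its own.

```python
# Hedged sketch: quantifying "LLM visibility" as a citation rate
# across repeated samples, instead of trusting one answer.
import random
from collections import Counter

def get_answer(prompt: str) -> list[str]:
    """HYPOTHETICAL stand-in: returns the domains cited in one
    sampled answer. Replace with your actual sampling harness."""
    pool = ["yourbrand.com", "competitor.com", "wikipedia.org"]
    return random.sample(pool, k=random.randint(0, len(pool)))

def citation_rate(prompt: str, samples: int = 100) -> dict[str, float]:
    """Fraction of sampled answers in which each domain appears.
    Re-running this makes the variance visible, which is exactly
    why a single screenshot is not a metric."""
    counts: Counter[str] = Counter()
    for _ in range(samples):
        counts.update(set(get_answer(prompt)))  # dedupe within one answer
    return {domain: n / samples for domain, n in counts.items()}

rates = citation_rate("best tool for X")
```

Even with a mocked source, the shape of the output (a rate with sampling noise, not a yes/no) is the honest way to present this to leadership.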

Risk 4: Strategy debt (optimizing rank while ignoring what models say)

If you only optimize “rank,” you miss the parts of the ecosystem that shape what models repeat even when they don’t retrieve. Models and answer systems don’t just reflect what ranks; they reflect what’s widely cited, consistently phrased, and easy to reuse without losing meaning.

That’s the trap. “GEO = SEO” is correct about overlap, but wrong as a strategy boundary.

The fix: a two-layer visibility strategy

Here’s a mental model that survives product churn: treat visibility as two layers you deliberately build in parallel.

The first layer is the Search Layer. This is classic SEO infrastructure: crawlability, indexing, technical health, authority, and the ability to earn and keep rankings. It still matters because it’s the most dependable distribution channel and because many answer systems still borrow signals from the open web.

The second layer is the Corpus Layer. This is everything that makes your information easy for models (and humans) to retrieve, trust, paraphrase, and reuse even when a live SERP isn’t doing the work for you.

In practice, you keep SEO strong and you add a second track that assumes some answers won’t be grounded in live rankings.

What Corpus Layer work looks like (without GEO theater)

Corpus Layer work isn’t magic. It’s publishing and positioning your information so it becomes a “reference-shaped object” in the ecosystem.

You start by earning high-signal mentions in places that behave like knowledge infrastructure: well-moderated forums, Q&A communities, technical documentation, and research-adjacent hubs. The point isn’t “spray the brand everywhere.” The point is to participate where credibility has friction. If you can’t sustain contribution without spamming, you’re not building an asset—you’re building a liability.

Next, you stop thinking in terms of “pages” and start thinking in terms of citable passages. The unit you want to win is a paragraph that can survive copy/paste into an answer: a clear claim, a grounded explanation, and enough context that it doesn’t mislead when detached from the rest of the article. This is the difference between “content” and “reference material.”

From there, publish canonical brand definitions you can actually own. If a term is emerging and clarity is missing, define it early and precisely, then reinforce that definition consistently across your docs, blog, and internal enablement content. The guardrail is simple: don’t coin terms for attention. Coin terms only when you’re solving confusion you can point to in the real world.

Then you cover query fan-out paths: the adjacent questions that inevitably follow the first question. Answer systems love to generalize. If your content anticipates the edge cases, objections, and follow-ups, you reduce the chance that the model fills the gaps with someone else’s framing (or with nonsense). Your best compass here isn’t keyword tools; it’s internal search logs, support tickets, sales call objections, and “why didn’t this work?” threads.

You also let yourself be opinionated where it matters, as long as it’s attributable and defensible. Strong stances get synthesized and repeated because they provide a crisp decision rule. The guardrail is time: if you can’t stand behind the opinion six months later, it’s not a stance; it’s a marketing spike.

Write for paraphrasing durability. Models summarize. People summarize. Slack summarizes. If your meaning collapses when compressed, you’re building fragile visibility. Use clear claims, avoid ambiguity, and don’t rely on clever phrasing to carry the point.

Finally, embed your brand in workflows, not just topics. “What is X?” content is fine, but procedural knowledge is stickier: “How do I do X safely?” and “What are the failure modes?” If you can become the reference for doing the thing, not just defining the thing, you become harder to displace.

The highest-leverage version of this is owning under-documented niches. Go deep where the web is thin and messy. Models struggle there, and so do competitors. It’s the opposite of competing on the obvious head term. It’s also the kind of work that still pays off even if the interfaces change again.

Guardrails for in-house teams (how to avoid GEO theater)

The simplest test is blunt: if a tactic can’t be explained in plain language, don’t deploy it. “Because the model likes it” is not an explanation; it’s a superstition.

Prefer strategies that simultaneously improve SEO fundamentals and the reusability of your information. If it makes crawling, indexing, rendering, trust, and comprehension better, it’s usually safe. If it only makes a dashboard look exciting, it’s usually theater.

And don’t optimize for attention. Optimize for outcomes you can defend in a quarterly review when the product UI has changed twice and leadership is asking why you spent time there.

What to tell your boss

Yes, GEO overlaps heavily with SEO. No, that doesn’t mean SEO alone is sufficient. The plan is two-layer: protect the Search Layer, build the Corpus Layer, and avoid tactics you can’t measure or explain.

Because the moment grounding weakens, or the UI decides links are optional, you don’t want to discover your “GEO strategy” was just SEO with a costume.

👤 Operator of Interest: Cindy Krum

  • Known for: Predicting major search trends, keynote speaking, and mobile SEO.

  • Works at: MobileMoxie

  • Follow: LinkedIn

Learn This:

Context Window: The amount of text, measured in tokens, that an AI model can consider at one time.
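A minimal sketch of the idea, using whitespace-split words as a crude stand-in for real tokenization (actual models count tokens with a tokenizer such as BPE, and the window size below is an illustrative number, not any specific model’s limit):

```python
CONTEXT_WINDOW = 8  # hypothetical limit, in "tokens"

def truncate_to_window(text: str, window: int = CONTEXT_WINDOW) -> str:
    """Keep only the most recent `window` tokens, the way a long chat
    session effectively drops its oldest turns once it overflows."""
    tokens = text.split()  # crude tokenizer stand-in
    return " ".join(tokens[-window:])

conversation = "turn1 turn2 turn3 turn4 turn5 turn6 turn7 turn8 turn9 turn10"
print(truncate_to_window(conversation))
# prints "turn3 turn4 turn5 turn6 turn7 turn8 turn9 turn10"
```

The practical takeaway: anything outside the window simply doesn’t exist to the model, no matter how important it was earlier in the conversation.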

One more thing: AI is only as good as its operator, and if you are reading this newsletter, you are better than most!

Till next time,

Joe Hall

PS: Let me know what you think of this issue, or anything else here: [email protected]
