📖 In This Issue
Featured Snippets (News & Resources)
Cover Story: What GEO Actually Inherits from Traditional SEO
Operator of Interest: Dan Petrovic
Learn This: Crawl Budget
SEO For LLMs (GEO/AEO) Checklist
📰 Featured Snippets (News & Resources)
Will Scott shows us how to build an SEO “command center” in Claude Code. I am usually very skeptical of this sort of article, but Will really nails the execution with the right expectations.
Malte Landwehr wrote a great guide for deciding which prompts companies should focus on for LLM tracking.
Nikki Pilkington lays out common risk patterns with “AI SEO hacks” that will likely f*ck up your traditional SEO.
Google is rolling out its AI Canvas product inside SERPs that use AI Mode. The move will likely increase time spent in the results and decrease clicks to other websites.
What GEO Actually Inherits from Traditional SEO
Everyone’s asking the same question right now: “How do we rank in ChatGPT, AI Overviews, or Perplexity?”
But the more useful question is quieter, and a little more uncomfortable: what parts of generative AI visibility are still downstream of classic SEO infrastructure?
Because “SEO for AI” isn’t a brand-new discipline as much as a new consumer of the same old inputs: crawlable pages, interpretable content, and earned trust signals. The interface changed. The plumbing didn’t.
The inheritance nobody wants to admit: crawlability still gates everything
If a system can’t reliably fetch and parse your pages, you don’t get to be “understood.” You get to be guessed about.
In traditional SEO, the story is familiar: discovery leads to crawling, rendering, indexing, and ranking. In an AI-shaped world, there’s another layer: retrieval and grounding. Tools like Perplexity explicitly center citations in the product, pulling from sources to back answers. Google’s AI Overviews also present AI-generated snapshots with links to “dig deeper,” which is a polite way of saying the model is downstream of what it can access and reference. ChatGPT’s search experience similarly points users to sources via a dedicated Sources view.
That means crawlability now has more stakeholders. It’s not just “Can Googlebot render this?” It’s also “Can retrieval systems fetch the canonical version quickly, consistently, and without getting trapped in your infrastructure?”
This is where things break under load. JavaScript-heavy rendering can still introduce delays and inconsistencies, especially when key content is dependent on client-side execution. Google has been explicit that JavaScript processing exists, but it has constraints and best practices for a reason. Add bot mitigation that blocks unknown agents, unstable URL patterns, inconsistent canonicals, or fragmented duplicates—and you get the modern symptom: “We’re visible in search, but we’re missing in AI answers.”
Teams often treat that symptom like a prompting problem. It’s usually retrieval friction.
The risk is straightforward: you chase “AI visibility hacks” while the real issue is basic access. Blocked endpoints. Broken rendering parity. Templates that hide the answer behind an interaction. Duplicate clusters that confuse which page is “the source of truth.”
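One way to sanity-check rendering parity is to fetch a page's raw HTML the way a non-JavaScript fetcher would, then confirm your key facts are actually in it. A minimal sketch, where the URL, key phrases, and user-agent string are placeholders you'd swap for your own:

```python
import urllib.request


def fetch_raw_html(url, user_agent="Mozilla/5.0 (compatible; ParityCheck/1.0)"):
    """Fetch a page WITHOUT executing JavaScript, as many retrieval bots do."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")


def missing_phrases(html, key_phrases):
    """Return the key phrases absent from the raw HTML.

    An empty list suggests the content does not depend on
    client-side rendering to be visible to simple fetchers.
    """
    return [p for p in key_phrases if p not in html]


# Example: check that a page's core claims survive without JS.
# html = fetch_raw_html("https://example.com/pricing")
# print(missing_phrases(html, ["$49/month", "14-day free trial"]))
```

If phrases come back missing, the content only exists after client-side execution, which is exactly the kind of rendering-parity gap that quietly removes you from retrieval.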
The takeaway is boring, and that’s why it works. Treat AI visibility like technical SEO with new stakeholders: search crawlers and retrieval crawlers. Fix the pipes first.
Content optimization evolves from “keywords” to “extractability”
LLMs don’t “rank” a page the way a SERP does. But they still have to pull something cleanly into an answer, especially in experiences that attach citations and source links.
Traditional on-page SEO still matters here, not because the model is counting keywords, but because structure reduces ambiguity. Clear topic focus. Consistent terminology. Strong internal linking that reinforces the page’s place in the site’s information architecture. A scannable layout that makes it obvious where definitions, constraints, and examples live.
What changes is the optimization target. You’re not just matching query terms anymore. You’re trying to make the correct interpretation the path of least resistance.
That means definitions that don’t contradict themselves three scrolls later. Sections that answer sub-questions directly, without burying the lead. Comparisons that specify what’s true, what’s not, and under what conditions. Caveats that are explicit instead of implied.
AI doesn’t magically fix weak content. It multiplies it. Thin pages become confident-sounding nonsense when summarized. Conflicting pages become hallucination fuel when the system tries to reconcile them into one “helpful” paragraph.
The takeaway is to write for extraction. Make the “right answer” the easiest chunk to lift without losing meaning. If a model or an answer engine is going to quote you, help it quote the part you’d actually stand behind.
Off-site signals still matter because “trust” is not optional
Generative systems don’t invent authority from scratch. They infer it from what the open web already signals: corroboration, reputation, and consistency across independent sources.
Perplexity, for example, frames itself around “trusted, real-time answers” with citations that let users verify the underlying sources. Google’s AI Overviews likewise emphasize links so users can explore and validate. These products don’t remove trust. They operationalize it.
Off-site trust signals still look like what you’d expect, just with higher consequences. Mentions in respected publications. References from experts. Reviews and community consensus. Affiliations that are easy to verify. Original research that other people cite because it’s useful, not because you asked them to.
Here’s the both-sides reality that matters for in-house teams. Yes, you can sometimes appear in AI answers without being a top-three ranking page, because systems may assemble responses from multiple sources and retrieve supporting evidence outside the classic “winner takes the click” dynamic. And also yes, brands with durable authority are more likely to be used as sources, because reputation makes it safer for a system to lean on you—especially when citations are visible to users.
The risk is mis-allocation. Over-investing in “AI content” while under-investing in credibility building: PR, partnerships, expert-led work, and publishing things that hold up when other people scrutinize them.
The takeaway is the oldest rule in SEO, wearing a new UI. Trust is earned off the page, then proven on it.
New interface, the same process: retrieval changes the failure modes
When answers are assembled from multiple sources, “winning” looks different, and so do the ways you can lose.
One new failure mode is being used but not credited. Your content influences the answer, but no citation appears, or a competing source gets the visible link. Another is being cited for the wrong claim, where your page becomes the convenient reference for something you didn’t actually say. A third is freshness conflict: an older page of yours contradicts a newer industry source, and the system chooses the other narrative. Products that foreground sources make this more visible, not less.
This is where traditional SEO infrastructure still saves you. Strong canonicals help systems land on the correct “source of truth.” Stable URLs reduce the odds that an older, deprecated version keeps circulating. Consistent entity naming reduces mis-attribution. Clear authorship and policy pages make it easier for evaluators (human or automated) to decide you’re reputable.
The risk is measurement blindness. Teams track “did we get a citation” and miss the deeper problem: brand inaccuracies at scale. In an AI-mediated world, one wrong interpretation can propagate faster than your correction.
The takeaway is to optimize for correct representation, not just inclusion. The new KPI isn’t only visibility. It’s whether the system repeats you accurately.
What your in-house team can actually do this quarter
Start where the leverage is highest: access, clarity, consistency, trust.
On crawl and access, treat retrieval as a first-class customer. Review robots directives, server headers, and bot mitigation rules with the assumption that not every legitimate fetcher looks exactly like Googlebot. Validate that your important templates render meaningful content without fragile client-side dependencies, and that resources required for understanding aren’t blocked in ways that create partial or misleading renders. Use your logs and your own testing to confirm what automated agents can actually fetch, not what your browser can display. Google’s own JavaScript SEO guidance is a good reminder that rendering and indexing have constraints that show up as real-world visibility gaps.
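A quick way to audit the robots side of that review is to test your robots.txt against the user agents AI systems actually send. A minimal sketch using Python's standard-library parser; the agent list covers a few publicly documented crawlers and is not exhaustive:

```python
from urllib import robotparser

# A few publicly documented AI-related user agents; extend for your needs.
AI_CRAWLERS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]


def ai_access(robots_txt, path="/"):
    """Map each AI user agent to whether robots.txt allows it to fetch `path`."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {ua: rp.can_fetch(ua, path) for ua in AI_CRAWLERS}


rules = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(ai_access(rules))  # GPTBot blocked; the rest fall through to the wildcard
```

Run this against your live robots.txt and the paths you actually want retrieved, not just the homepage; a surprising number of "missing in AI answers" cases trace back to a blanket Disallow someone added during a bot-mitigation scare.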
On extractability, tighten structure before you add volume. Add direct-answer sections where it makes sense, not as a gimmick, but as a clarity tool. Simplify definitions so they can be quoted without needing three paragraphs of context. Resolve contradictions across templates and overlapping pages; if your own site can’t agree with itself, the model won’t either.
On entity and trust, make it easy to verify who you are and why anyone should trust you. Strengthen about pages, author bios, editorial policies, and citation hygiene. Align brand/entity signals across the web so the same organization isn’t described five different ways depending on which profile a system retrieves.
On off-site credibility, invest in things that generate real citations. Publish research, build tools, collaborate with credible partners, and contribute expertise where the community already pays attention. If you want to show up as a source, you need to be a source.
On monitoring, go beyond “mentions.” Track what pages get cited, which claims get repeated, and where errors propagate. When you find a misrepresentation, don’t just correct it in one place; fix the root ambiguity across the cluster of pages that could be feeding it.
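On the access-log side of monitoring, even a rough tally of which AI fetchers hit which URLs tells you whether retrieval is happening at all. A minimal sketch over combined-format access-log lines; the bot names are examples, so match them to the user-agent strings in your own logs:

```python
from collections import Counter

# Example AI fetcher names; adjust to what appears in your logs.
AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "OAI-SearchBot"]


def ai_bot_hits(log_lines):
    """Count hits per (bot, URL path) from access-log lines.

    Assumes the request path is the 7th whitespace-separated field,
    as in the common combined log format; adjust for your layout.
    """
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                parts = line.split()
                path = parts[6] if len(parts) > 6 else "?"
                hits[(bot, path)] += 1
    return hits
```

Trending these counts over time shows whether retrieval systems are reaching your important templates at all, which is the prerequisite for any citation you hope to earn.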
GEO is not a prompt problem. It’s an infrastructure problem.
The teams that win will look boring from the outside. Better crawlability. Clearer content. Stronger trust signals. Tighter consistency. Fewer contradictions. More proof.
That isn’t outdated. That’s how you keep visibility when the interface changes again.
👤 Operator of Interest: Dan Petrovic

Known for: Founding an Australian AI SEO agency.
Works at: DEJAN MARKETING
Follow: LinkedIn
Learn This:
Crawl Budget: The number of URLs a search engine is willing and able to crawl on a site within a given timeframe, shaped by how fast the site can be fetched without strain and how much demand there is for its content.
SEO For LLMs (GEO/AEO) Checklist
I finally finished writing a comprehensive SEO For LLMs (GEO/AEO) checklist that I can’t wait for you to see! But first I need to ask a small favor: Please share this newsletter with your friends, and I’ll email you this checklist for free as a thank you gift!
Ask your friends to subscribe with this URL: {{rp_refer_url}}
If just one of them subscribes, I’ll email you the free checklist.
One more thing: AI is only as good as its operator, and if you are reading this newsletter, you are better than most!
Till next time,
Joe Hall
PS: Let me know what you think of this issue, or anything else here: [email protected]

