📖 In This Issue
Featured Snippets (News & Resources)
Cover Story: The SEO Strategy LLMs Love (That Nobody Talks About)
Operator of Interest: Will Scott
Learn This: Retrieval-Augmented Generation (RAG)
📰 Featured Snippets (News & Resources)
Google is now testing AI-rewritten headlines in the SERPs. To me this isn’t that new; Google has been rewriting title tags for years, long before AI.
The Wall Street Journal tries to tackle the changes to Google and SEO. Their take is a bit oversimplified, but it’s fun to see SEO get more attention in mainstream media.
Google’s AI Overviews are about to get a lot more “personal” with the inclusion of Personal Intelligence, which aims to personalize answers based on what Google knows about you.
Glenn Gabe uncovers links from ChatGPT inside Google’s Search Console link reporting. The links are from shared chats, and could be a new way to track real LLM visibility.
The SEO Strategy LLMs Love
AI Rewards Boring SEO Work More Than New Tactics
AI Overviews, “AI answers,” and chat-based search are changing where attention goes. That part is obvious. The less comfortable part is this: if clicks and credit are being redistributed by systems that summarize, why are so many teams spending their limited cycles chasing new content formats instead of fixing the parts of the site those systems actually depend on?
Because the uncomfortable truth is that “AI visibility” often has less to do with novelty and more to do with whether your information is easy to retrieve, trust, and reuse at scale. AI is not a shortcut; it’s a multiplier of existing quality or existing debt.
Foundational SEO Is Key
Foundational SEO work tends to show up more in AI-driven surfaces than experimental tactics. Clean URLs, stable templates, updated evergreen pages, consistent internal linking, predictable rendering. Not because it’s exciting. Because it reduces ambiguity and increases confidence.
Google has been unusually direct about this in its guidance for AI features: the same foundational SEO best practices still matter, and there aren’t special “AI optimization” requirements beyond being eligible for Search in the first place. In other words, you don’t unlock AI visibility with a new trick. You earn it by being indexable, parsable, and reliable.
And if you need a reminder that the distribution really is changing: Pew’s analysis of Google searches in March 2025 found that users were less likely to click when an AI summary appeared, and that AI summaries frequently cite multiple sources (usually three or more). The game is not only “rank and get the click” anymore. It’s also “be one of the sources the system feels safe citing.”
AI systems amplify quality and debt
LLM-driven surfaces don’t “read your site” the way humans do. They ingest what they can access reliably, then stitch together an answer that sounds coherent. That means inconsistency isn’t just a mild ranking tax. It’s a credibility penalty.
Template churn, duplicate URLs, stale pages that contradict your newer pages, fragile JavaScript rendering that delays or hides key text: these don’t just make you harder to crawl, they make you harder to summarize without making mistakes. Google’s own JavaScript SEO documentation spells out that processing happens in phases (crawling, rendering, indexing), and that link discovery and indexing depend on what Googlebot can parse and render. If your important content is effectively “late,” you’re asking the system to build on unstable ground.
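A quick way to sanity-check this is to compare the raw, pre-JavaScript HTML against the copy you expect to be indexed. This is a minimal sketch of that idea (the sample page and phrases are invented; a real check would fetch your URL with a plain HTTP client that runs no JavaScript):

```python
# Minimal sketch: verify that key copy exists in the raw, pre-JavaScript HTML.
# RAW_HTML stands in for what a crawler receives before any rendering happens.

RAW_HTML = """
<html><body>
  <h1>Widget Pricing</h1>
  <div id="app"><!-- content injected by JavaScript after load --></div>
</body></html>
"""

KEY_PHRASES = ["Widget Pricing", "$49 per month", "30-day free trial"]

def missing_from_raw_html(html: str, phrases: list[str]) -> list[str]:
    """Return phrases that only appear after client-side rendering (if at all)."""
    return [p for p in phrases if p not in html]

# Anything printed here is content that only exists after the render phase.
print(missing_from_raw_html(RAW_HTML, KEY_PHRASES))
```

If that list isn’t empty for your money pages, the “late content” problem above is yours too.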
Novel tactics don’t fix retrieval
A lot of “AI-first” experimentation adds complexity while pretending it’s progress. New interactive formats, novelty hubs, heavy schema experiments, multiple page variants “for LLMs,” JavaScript layers that reshape the DOM after load: these often increase crawl and render work, multiply URL variants, and create more editorial surfaces to keep consistent.
Even when they show a short-term bump, they can create the exact kind of maintenance debt that AI surfaces punish over time. Google’s canonicalization guidance is blunt about how easily sites confuse themselves: inconsistent canonical signals, internal links pointing to duplicates, and client-side rendering that changes canonical tags can all muddy the picture. The guidance even calls out that if you’re using client-side rendering, you should keep canonical information as clear as possible and avoid having JavaScript rewrite it.
“We look visible, but we’re not reusable”
It’s possible to feel like you’re “in the mix” because impressions are up, your brand gets mentioned, or you occasionally appear in a cited list. But that’s not the same as being the source the system leans on.
AI systems prefer content that stays stable over time, is easy to decompose into facts, and is internally consistent across pages. They also reward clarity around ownership and maintenance because that reduces the chance they’re amplifying something outdated. Google’s AI features documentation explicitly points back to fundamentals like internal linking, textual availability of important content, and keeping business information up to date, because those are the parts machines can validate and reuse.
What “boring SEO” means in an AI era
Foundation 1: Crawl hygiene is retrieval hygiene
One page should resolve to one canonical URL. Not “mostly one,” not “one unless filters are applied,” not “one except the tracking parameter version that got linked in a newsletter and now ranks.” One.
This isn’t aesthetic. It’s an ambiguity problem. When you let parameterized duplicates and near-identical variants proliferate, you aren’t giving AI systems more content. You’re giving them more contradictions. Google’s duplicate URL consolidation guidance frames canonicalization partly as a way to avoid wasting crawl resources on duplicates and to help search engines understand your preferred version. It also emphasizes a simple operational point teams routinely violate: link internally to the canonical URL, consistently, because that’s a signal you control every day.
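To make the “one page, one URL” idea concrete, here’s an illustrative sketch that collapses tracking-parameter duplicates onto a single canonical form. The parameter list is an example, not a standard; your own allow/deny list will differ:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Example deny-list of common tracking parameters (adjust for your stack).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "gclid", "fbclid"}

def canonicalize(url: str) -> str:
    """Lowercase the host, drop tracking params, and strip fragments."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path, urlencode(query), ""))

print(canonicalize("https://Example.com/guide?utm_source=newsletter&page=2#intro"))
# -> https://example.com/guide?page=2
```

Run something like this over your internal link export and you’ll see fast whether you’re linking to one URL per page, or several.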
Foundation 2: Stable templates enable stable extraction
If your headings, nav patterns, FAQs, and primary content blocks move every quarter, you are training parsers to distrust you. The layout doesn’t have to be boring. It has to be predictable.
When systems extract and summarize, they’re trying to answer basic questions quickly: what is this page about, where is the primary answer, what is supporting material, and what changed recently. Stability makes those answers easier. Churn makes everything look like “maybe.”
Foundation 3: Evergreen maintenance is trust maintenance
LLMs love to repeat quietly wrong information. They also love pages that look maintained, because a maintained page is less likely to embarrass them.
So maintenance can’t be “when someone remembers.” It has to be scheduled. Your highest-value evergreen pages should get regular fact checks, refreshed examples, and cleaned-up definitions. Dead pages should be retired or redirected. Competing older pages should be consolidated so you stop disagreeing with yourself in public.
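“Scheduled” can be as simple as a last-reviewed date per page and a cadence check. This sketch uses invented page data and an example 180-day threshold:

```python
from datetime import date

# Example data: last-reviewed dates for evergreen pages (invented for illustration).
pages = {
    "/guides/seo-basics": date(2025, 1, 10),
    "/guides/internal-linking": date(2023, 6, 2),
}

def stale_pages(last_reviewed: dict[str, date], today: date,
                max_days: int = 180) -> list[str]:
    """Return pages whose last review is older than the chosen cadence."""
    return sorted(p for p, d in last_reviewed.items()
                  if (today - d).days > max_days)

print(stale_pages(pages, date(2025, 6, 1)))
# -> ['/guides/internal-linking']
```

The point isn’t the tooling; it’s that staleness becomes a query you run, not a thing someone has to remember.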
Foundation 4: Internal linking that reflects your real worldview
Internal links are your site’s map of meaning. In an AI era, that map matters more, not less, because it reduces uncertainty about which pages are central and how topics relate.
Google calls out internal linking explicitly as part of making content easily findable. That’s not just about PageRank flow. It’s about building a coherent knowledge structure: clusters that match how users actually ask questions, anchors that don’t reinvent themselves every time a writer gets creative, and key pages that aren’t effectively orphaned behind faceted navigation or JavaScript-only pathways.
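Orphaned pages are easy to surface once you treat your internal links as a map. This is a toy sketch over an invented page-to-links structure; real input would come from a crawler export:

```python
# Site represented as page -> outbound internal links (invented example data).
site_links = {
    "/": ["/pricing", "/guides/seo-basics"],
    "/pricing": ["/"],
    "/guides/seo-basics": ["/guides/internal-linking", "/pricing"],
    "/guides/internal-linking": ["/guides/seo-basics"],
    "/old-landing-page": [],   # nothing links here: effectively orphaned
}

def find_orphans(links: dict[str, list[str]]) -> set[str]:
    """Pages never targeted by an internal link (excluding the homepage)."""
    targets = {t for outbound in links.values() for t in outbound}
    return {page for page in links if page not in targets and page != "/"}

print(find_orphans(site_links))
# -> {'/old-landing-page'}
```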
OK, but some things have to adapt, right?
When new tactics really do help AI visibility
New tactics can be worth it when they reduce ambiguity. If a “new” approach gives you clearer page structure, better entity disambiguation, simpler rendering, or cleaner semantics, it’s not really novelty. It’s infrastructure work wearing a new outfit.
New tactics also help when they produce genuinely reference-worthy material. Original data, benchmarks, definitions that people quote, step-by-step processes that are hard to find elsewhere. Systems that summarize still need something worth pulling, and uniqueness is still a moat.
When boring work is not enough
A perfect technical foundation won’t save undifferentiated content. Maintenance can make you a cleaner version of “same as everyone,” and AI surfaces already have plenty of that.
Infrastructure earns you eligibility and reliability. It does not automatically earn you preference. You still need actual expertise, original insight, and a point of view that holds up when someone stakes an answer on it.
What to do next
Here’s the prioritization rule that holds up under pressure: if the work improves crawlability, consistency, or confidence, it’s probably an AI visibility win. If it adds complexity without improving those, it’s probably theater.
Run a short sprint on the pages that matter most. Pick roughly 20 URLs that represent the majority of your non-brand opportunity. For each page, verify that the canonical is correct and that duplicates are either consolidated or clearly deprioritized. Confirm the template is consistent enough that the primary content is obvious and stable, including headings and any author or date signals where relevant. Refresh the core answer so the page is factually current and plainly written. Add internal links in both directions to adjacent topics so the page sits inside a clear cluster instead of floating alone. Then remove or redirect older competing pages so you stop feeding the ecosystem multiple versions of your truth.
If you need a line for stakeholders, keep it simple: We’re not avoiding innovation. We’re fixing the parts of the site AI systems depend on to trust us at scale.
👤 Operator of Interest: Will Scott

Known for: SEO, AI consulting, & speaking
Works at: Search Influence
Follow: LinkedIn
Learn This:
Retrieval-Augmented Generation (RAG): A method where AI retrieves relevant data before generating responses.
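A toy sketch of the “retrieval” half, so the idea is concrete. It scores documents by word overlap with the question, then assembles the augmented prompt a model would receive. A real system would use embeddings and an actual LLM call; the documents here are invented:

```python
# Invented mini-corpus for illustration.
DOCS = [
    "Canonical tags tell search engines which URL version to index.",
    "Internal links help crawlers discover and prioritize pages.",
    "RAG retrieves relevant data before the model generates a response.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by crude word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved context to the question, RAG-style."""
    context = "\n".join(retrieve(question, DOCS))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What do canonical tags tell search engines?"))
```

The retrieved text grounds the model’s answer in your data instead of its training memory, which is the whole point of RAG.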
One more thing: AI is only as good as its operator, and if you are reading this newsletter, you are better than most!
Till next time,
Joe Hall
PS: Let me know what you think of this issue, or anything else here: [email protected]

