📖 In This Issue

  • Featured Snippets (News & Resources)

  • Cover Story: What I Warn My Clients About "Optimizing for AI Answers"

  • Operator of Interest: Lily Ray

  • Learn This: Multimodal AI

📰 Featured Snippets (News & Resources)

Minas Karamanis wrote a great essay about how over-relying on AI will likely make us (for lack of a better term) dumber.

ChatGPT has enabled location sharing to better serve “near me” search queries. Does this make GEO/SEO for LLMs a bigger reality for Local SEOs?

To be successful at SEO for LLMs, it is critical that you have an in-depth understanding of the social licenses and partnerships that drive most AI visibility.

The New York Times has an inspiring story about how AI helped build a two-person company worth $1.8 billion.

What I Warn My Clients About "Optimizing for AI Answers"

Let’s say leadership walks in on Monday and says, “We need to show up in ChatGPT and AI Overviews. Make it happen.”

Cool. That sentence can either become a sensible experiment, or it can become the seed crystal for a year of slow, expensive SEO debt disguised as “innovation.”

What do we mean by “AI answers,” exactly?

If we can’t define the thing we’re optimizing for, we’re not optimizing. We’re just moving.

“AI answers” is not a single surface. It’s a category of distribution layers with different incentives, different formatting constraints, different citation behavior, and different ways of stealing the click while still giving you the illusion of “visibility.”

So the first trap is letting “show up in AI” become a vibe.

You will feel progress because someone on the team screenshots a mention. You will feel progress because the CEO gets your brand in their personal chat. You will feel progress because a vendor says you’re “well-positioned for the future.”

None of that is a metric. None of that is a plan.

If we’re doing this responsibly, we name three things in plain language.

We name the surfaces we care about. We name the query classes that matter. We name the business outcomes that justify the work.

If we can’t say, “We’re targeting these query types because they lead to these outcomes, and we’ll measure it this way,” we’re not ready to operationalize this. We’re ready to play.
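
One way to force that plainness is to write the plan down as a structured artifact before anyone touches a page. Here’s a minimal sketch in Python; every surface, query class, and outcome below is illustrative, not a recommendation.

```python
# An experiment spec you can argue about in a doc review. Every value
# here is illustrative; the point is that nothing hides behind "AI."
EXPERIMENT = {
    "surfaces": ["AI Overviews", "ChatGPT answers"],
    "query_classes": {
        "informational": {
            "outcome": "brand search lift",
            "measure": "weekly brand query volume",
        },
        "commercial": {
            "outcome": "demo requests",
            "measure": "assisted conversions",
        },
    },
    "time_window_weeks": 12,
    "decision_point": "kill or scale at week 12, in writing",
}
```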

Are we trading durable search equity for a speculative distribution layer?

Here’s the uncomfortable truth: the boring stuff still pays rent.

Crawlability still matters. Internal linking still matters. Canonicals still matter. Content hygiene still matters. Authority still matters.

LLM-driven surfaces may change faster than our quarterly roadmap. UI changes. Eligibility changes. Attribution changes. Citation logic changes. Even the definition of “success” changes depending on whether the platform feels like sending you traffic this week.

That doesn’t mean “AI answers” are a mirage. It means they are not infrastructure yet.

The responsible posture is to treat LLM visibility like an add-on until it proves itself. You fund it like an experiment. You don’t fund it like a rebuild of your content system.

The worst version of this initiative is when we quietly cannibalize fundamentals to feed the shiny thing. The most common way that happens is headcount and attention. People stop doing technical SEO and content maintenance because “we’re doing AI strategy now.”

Then six months later you’re not just uncertain about AI distribution. You’re also worse at search.

What assumption are we taking for granted about how models choose sources?

There is a real “pro” case here, and it’s worth acknowledging.

Structured, consistent, well-cited content can be easier for systems to parse and reuse. If your pages are cleanly organized, definitions are stable, terminology is consistent, and claims are supported, you often become a better reference candidate.

That’s the good version. That’s just “be the best source” dressed in modern clothes.

Now the “con” case.

Models don’t always reward “most correct.” They often reward “most convenient.” They may prefer big brands, aggregators, or pages that are easy to summarize, even when those pages are thin. Citation behavior can be inconsistent, and attribution can be missing or wrong. Even when you are referenced, the link might not appear where a user would click, or the summary might eliminate your differentiator.

So if our plan is basically “make the page easier to excerpt,” we’re playing a fragile game. We might win a citation and lose the category because we trained the system to treat us as interchangeable.

The safer aim is not “be summarized.” The safer aim is “be the strongest reference on the topic.” When we aren’t the strongest reference, “LLM optimization” turns into makeup over substance.

What breaks when this scales?

This is where most teams hurt themselves.

A small, careful, human-reviewed update to a few high-impact pages can be healthy. It can improve clarity, improve scannability, and reduce misinterpretation. It can also help traditional SEO.

Then someone says, “Great, let’s roll it out across 3,000 URLs.”

And suddenly “AI-friendly” becomes a template. Templates become sameness. Sameness becomes a content quality debt that is brutal to unwind.

You start seeing the symptoms.

Pages get repetitive headings because the template demands it. Introductions get replaced with generic definitions because “LLMs like definitions.” Every page grows a forced Q&A block because someone saw a tweet about extraction. Paragraphs become over-structured, not because humans need structure, but because someone is writing for the model’s digestion.

This isn’t just an aesthetic problem. At scale, sameness is a signal. It changes how your site looks to crawlers, how users behave, how internal linking concentrates, and how your content competes with itself.

Guardrails are not optional if we scale. Editorial standards need to be real, enforced, and allowed to block launches. Duplication checks need to be built into the workflow. The “human usefulness” bar needs to be the non-negotiable constraint, even when someone argues that “AI prefers it this way.”
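
If “duplication checks built into the workflow” sounds abstract, here’s roughly the shape of one. A minimal sketch, assuming your page copy is exported as plain-text files; the directory and the 0.8 threshold are placeholders, and shingle overlap is just one cheap way to catch scaled sameness before it ships.

```python
# A minimal near-duplicate gate: flag page pairs whose word shingles
# overlap beyond a threshold. Paths and the cutoff are placeholders.
from itertools import combinations
from pathlib import Path

def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-word shingles in a page's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared shingles over total shingles."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Placeholder path: point this at wherever your page text actually lives.
pages = {p.name: shingles(p.read_text()) for p in Path("content/").glob("*.txt")}

for (name_a, sh_a), (name_b, sh_b) in combinations(pages.items(), 2):
    score = jaccard(sh_a, sh_b)
    if score > 0.8:  # tune per site; "scaled sameness" lives above this line
        print(f"BLOCK LAUNCH: {name_a} vs {name_b} ({score:.0%} overlap)")
```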

AI is a multiplier of quality or a multiplier of debt. At scale, it’s usually debt.

Are we optimizing for citations, or for the customer journey?

A citation is not a customer journey.

You can win the “answer moment” and still lose the business outcome if the user doesn’t trust you, can’t act, or never reaches your site. In some query types, being summarized reduces clicks. So you can rank “better” and measure less demand capture.

That’s not a reason to avoid LLM surfaces. It’s a reason to segment intent and be honest about tradeoffs.

For informational queries, you may accept lower click-through and optimize for brand recall, mental availability, and downstream retargeting. In plain English, you show up, you sound credible, and you’re easy to remember when the user is ready to buy.

For commercial and decision queries, you do not want to be a trivia source. You want to pull the user into evaluation. That means the experience matters more than the snippet. Tools, comparisons, proof, calculators, demos, “what to do next” clarity. Not just neat summaries.

If the only thing our “AI optimization” does is make the top of the page easier to summarize, we are optimizing the part of the funnel that the platform wants to keep for itself.

What are we willing to give up to be easier to summarize?

This is the trade nobody says out loud.

The more you compress, simplify, and standardize, the easier you are to summarize. But the same moves often erase nuance, caveats, and differentiation.

If you work in a space where nuance is the product, over-summarizing is self-harm. If you work in a regulated space, over-summarizing is risk.

A page that reads like a clean extraction target can be a weak brand artifact. It can also be a liability when someone misuses it.

The healthier approach is to keep the nuance where it matters and make the summary point toward depth, not replace it. Your “quick answer” should be an on-ramp, not a replacement brain.

Who absorbs the risk when the model gets it wrong and attributes it to us?

This is the part most AI SEO playbooks skip because it makes the room less fun.

Models misquote. They compress context until it breaks. They hallucinate. They attribute claims to the wrong source. They confidently restate a conditional as a rule.

When that happens, the platform doesn’t absorb the risk. You do.

Support gets tickets. Sales gets friction. Legal gets nervous. Brand trust takes the hit, especially if the claim is sensitive, outdated, or interpreted as a promise.

So “misinterpretation resilience” becomes part of content strategy.

That means clear definitions where people usually get confused. It means dated statements for volatile topics. It means explicit constraints and “this applies when…” language that survives compression. It means a page-level truth hierarchy where the definitive claims are unambiguous and the contextual nuance is clearly marked as context.

If we can’t explain how we’ll handle misattribution and outdated guidance, we should not be scaling anything.

Are we creating technical debt in the name of “LLM SEO”?

The fastest way to create long-term pain is to bolt new patterns onto your site because they might influence a system you don’t control.

Extra markup that nobody owns becomes orphan infrastructure. Fragmented pages created for “better extraction” become cannibalization. Duplicated Q&A endpoints become crawl inefficiency. Programmatic generation without governance becomes a content swamp.

Every new pattern needs an owner, a maintenance plan, and a rollback path. If we can’t answer “who maintains this when it breaks,” it doesn’t ship.

Tactics that look like LLM optimization but quietly create SEO debt

This is the part I’d want my own team to read twice, because these are the moves that feel productive while planting landmines.

The first tactic is forced Q&A blocks on every page. It looks like you’re making your content “answerable.” In practice, it often duplicates the same questions across hundreds of URLs, creates internal competition, and drags pages toward sameness. Users learn to skip it because it reads like boilerplate. Crawlers learn that your pages are interchangeable.

The second tactic is “LLM summary” sections that repeat the intro in a more robotic format. Teams do this because they think models prefer a neat paragraph of extraction-friendly text. The debt shows up as redundancy, lower engagement, and a creeping incentive to write everything twice. Over time, the summary becomes the real content, and the real content becomes optional. That’s how quality erodes without anyone feeling like they lowered standards.

The third tactic is schema stuffing to “signal authority.” Adding structured data that is technically valid but semantically pointless is a great way to bloat templates and create maintenance overhead. The debt shows up when you need to update it, when it conflicts with on-page content, or when it creates inconsistent entities and naming that you now have to normalize across the entire site. (There’s a sketch of the leaner alternative after this list.)

The fourth tactic is splitting one strong page into multiple thin pages because “AI likes focused pages.” Sometimes focus helps. Often it just creates shallow, overlapping URLs that cannibalize each other, dilute internal links, and make it harder for both humans and crawlers to find the definitive answer. You end up with a maze of near-duplicate pages where the only winner is the platform that summarizes them all anyway.

The fifth tactic is programmatically generating “definition pages” for every feature, every acronym, every edge-case query variant, because it feels like query coverage. The debt is obvious later: thin pages, low trust signals, index bloat, and a content system that becomes impossible to curate. AI didn’t remove the need for editorial judgment. It made the cost of skipping it higher.

None of these are automatically wrong. The point is that they are easy to scale, easy to justify, and hard to unwind. That’s the definition of debt.
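
On tactic three, the leaner alternative is markup generated only from fields your CMS already maintains, so it can never drift from the page. A minimal sketch, with illustrative field names and values:

```python
# Lean structured data: emit only fields the CMS actually maintains,
# so the JSON-LD can never contradict the page. Field names are
# illustrative; map them to whatever your CMS really stores.
import json

def article_schema(page: dict) -> str:
    """Build minimal Article JSON-LD from fields we already own."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": page["title"],
        "datePublished": page["published"],
        "dateModified": page["modified"],
        "author": {"@type": "Person", "name": page["author"]},
    }
    # Deliberately nothing speculative: every extra property is a thing
    # somebody now has to keep true across the entire site.
    return ('<script type="application/ld+json">'
            + json.dumps(data, indent=2)
            + "</script>")

print(article_schema({
    "title": "What I Warn My Clients About AI Answers",
    "published": "2025-01-01",
    "modified": "2025-01-01",
    "author": "Joe Hall",
}))
```

The restraint is the point. If nobody owns a property, it doesn’t go in the template.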

What data are we using to decide this is working?

If measurement is “we saw ourselves once,” we are not measuring. We are collecting vibes.

Decide measurement before execution. That means we pick leading indicators and lagging indicators, and we agree on decision points.

Leading indicators can be things like query coverage improvements, eligibility improvements, and sampled citation rates for defined query sets. Lagging indicators are the things that justify the work: assisted conversions, revenue influence, retention lift, support deflection, brand search lift, and improved conversion rates on commercial pages.
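
To make “sampled citation rates for defined query sets” concrete, here’s the shape of it: run a fixed query list on a schedule, record whether your domain appears in the answer, and watch the rate over time. The `ask_model` stub is a placeholder for whatever surface you’re actually sampling; everything here is a sketch, not a product.

```python
# Sample the citation rate for a fixed query set. ask_model() is a
# stub; swap in a real client for the surface you're measuring. The
# query list and domain are illustrative.
import csv
from datetime import date

QUERIES = [
    "best project management tool for agencies",
    "how to migrate a site without losing rankings",
    # ...the defined query set you named up front, not ad-hoc screenshots
]
DOMAIN = "example.com"  # the domain whose citations you're counting

def ask_model(query: str) -> str:
    """Stub: replace with a real client for the surface being sampled."""
    return ""  # with the stub in place, the script just logs a 0% rate

def citation_rate() -> float:
    hits = sum(DOMAIN in ask_model(q) for q in QUERIES)
    return hits / len(QUERIES)

# One row per run: the trend is the metric, not a single screenshot.
with open("citation_rate.csv", "a", newline="") as f:
    csv.writer(f).writerow([date.today().isoformat(), f"{citation_rate():.2%}"])
```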

If we can’t connect the work to outcomes leadership will still care about in six months, we should not scale it.

Are we incentivizing confident content instead of correct content?

LLM-friendly writing often rewards assertiveness. That can be fine in low-stakes categories. In high-stakes categories, it can be dangerous.

The failure mode is content that sounds clean and complete but quietly drops caveats and turns conditional guidance into universal rules. When a model then compresses that content again, you’re left with a confident lie.

So we need a correctness workflow. Not a vibe. A workflow.

That means SME sign-off thresholds for sensitive topics, versioning discipline, and “last reviewed” practices that are enforced, not aspirational.
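
“Enforced, not aspirational” can be as simple as a check that blocks the build when a sensitive page’s review date goes stale. A rough sketch, assuming each file carries a `last_reviewed: YYYY-MM-DD` line in its front matter; the path and the 180-day window are placeholders.

```python
# Fail the build when a sensitive page's review date is stale.
import re
import sys
from datetime import date, timedelta
from pathlib import Path

MAX_AGE = timedelta(days=180)  # placeholder window; set per risk level
PATTERN = re.compile(r"^last_reviewed:\s*(\d{4}-\d{2}-\d{2})", re.MULTILINE)

stale = []
for path in Path("content/sensitive/").glob("**/*.md"):  # placeholder path
    match = PATTERN.search(path.read_text())
    # A missing date counts as stale: "we never reviewed it" is not a pass.
    if not match or date.fromisoformat(match.group(1)) < date.today() - MAX_AGE:
        stale.append(str(path))

if stale:
    print("Blocked, pages needing SME review:", *stale, sep="\n  ")
    sys.exit(1)  # the nonzero exit is what makes "enforced" real
```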

What unintended internal consequences are we inviting?

If we’re not careful, this initiative changes jobs in the worst way.

Content teams become prompt operators. SEOs become QA for AI output instead of building durable search systems. Engineering gets pulled into speculative work without a clear ROI.

Protect focus. If the work does not reduce costs, increase demand capture, or materially improve user experience, it is a distraction wearing a strategy costume.

The checklist I’d require before we scale anything

Read this like a pre-flight checklist. If we can’t say “yes” to most of these, we’re not scaling. We’re experimenting, at best.

  1. Name the exact surfaces we’re targeting and the exact query sets that matter, in plain language, without hiding behind “AI visibility.”

  2. Define the outcome for each query class, and make sure the outcome is measurable in a way leadership will still respect later.

  3. Set a content quality bar that resists template spam, and make sure the bar is enforced by people who are allowed to block launches.

  4. Build duplication and cannibalization checks into the workflow, because “scaled clarity” and “scaled sameness” look identical until traffic drops.

  5. Give every new pattern an owner, a maintenance plan, and a rollback path, because orphan infrastructure is how technical debt becomes permanent.

  6. Develop a misattribution and compliance plan, including how we handle outdated guidance, sensitive claims, and unclear boundaries.

  7. Define an experiment design with a fixed time window, defined decision points, and a plan for what we will stop doing if the results are weak.

  8. Mandate that we explain the entire plan without jargon, because if we can’t explain it simply, we do not understand the risk.

If that feels strict, good. SEO is infrastructure. AI is a multiplier. The only question is whether it multiplies the part of your system you’re proud of, or the part you’ve been avoiding.

👤 Operator of Interest: Lily Ray

  • Known for: Tech SEO Legend, International DJ, proud dog mom.

  • Works at: Algorythmic

  • Follow: LinkedIn

Learn This:

Multimodal AI: AI systems that can process more than one type of input, such as text, images, and audio.

One more thing: AI is only as good as its operator, and if you are reading this newsletter, you are better than most!

Till next time,

Joe Hall

PS: Let me know what you think of this issue, or anything else here: [email protected]
