📖 In This Issue

  • SEO For LLMs (GEO/AEO) Checklist

  • Featured Snippets (News & Resources)

  • Cover Story: Why Your AI Content Fails, And How To Fix It

  • Operator of Interest: Krishna Madhavan

  • Learn This: Semantic Similarity

SEO For LLMs (GEO/AEO) Checklist

I just finished writing a comprehensive SEO For LLMs (GEO/AEO) checklist that I can’t wait for you to see! But first I need to ask a small favor: Please share this newsletter with your friends, and I’ll email you this checklist for free as a thank you gift!

  1. Ask your friends to subscribe with this URL: {{rp_refer_url}}

  2. If just one of them subscribes, I’ll email you the free checklist.

📰 Featured Snippets (News & Resources)

A new study shows that 79% of marketers and 77% of creators agree that AI delivers high-quality product recommendations, but only 35% of consumers agree.

Google AI Mode now works in 53 new languages. But the real question is, how well does it work in the 53 new locations?

Kevin Indig studied 1.2 million search results to learn how AI reads and selects content in the SERPs.

SEO and entrepreneur Nick Eubanks is exploring risk governance for enterprise Agentic AI at his new site RiskGovernance.com.

AI-Generated Content Fails When the Information Architecture Is Wrong

If the copy “looks good,” why does performance get worse after we add AI content?

Because AI often improves fluency, not fit. It makes sentences smoother. It makes pages feel finished. But when your information architecture is wrong, AI doesn’t correct the underlying model. It scales the wrong model faster. You get more pages, more overlap, more ambiguity, and a site that becomes harder for search engines and humans to understand at the same time.

Here’s the systems claim that matters more than any prompt: content quality isn’t just the words on the page. It’s the system the page lives in.

“Information architecture” sounds like a big, abstract thing, but it’s really four practical decisions you make, whether you admit it or not:

  • Taxonomy: how you group and label concepts so the site stays consistent.

  • Page purpose: what each page is actually for, and what it should be the best destination for.

  • URL strategy: what deserves its own URL versus what should live as a section, a filter, or not exist at all.

  • Internal linking and navigation: how you communicate priority and relationships so both users and crawlers can infer hierarchy.

When IA is solid, AI can be a multiplier. When IA is shaky, AI becomes a high-speed printer in a room where nobody agreed on the filing system.

Broken taxonomy is where “coverage” quietly turns into noise.

The symptom pattern is predictable. Categories are inconsistent, so the same concept lives in three places with three names. Tags behave like categories, which creates parallel universes of “almost the same” grouping. Templates spin up dozens of near-identical pages because the site is structured to create URLs, not meaning.

In that environment, AI does what it’s best at. It produces pages that sound complete. It fills sections. It adds headings. It writes “definitive guides” that look like they belong. The problem is that they don’t map cleanly to a coherent topic set because your site doesn’t have one. AI doesn’t resolve conceptual confusion; it replicates it at scale. The output reads fine, but the site’s taxonomy stops behaving like a catalog and starts behaving like a junk drawer with better grammar.

Unclear page purpose is where relevance becomes unprovable.

A page can be well-written and still fail if it can’t answer a blunt question: what query family is this the best destination for?

Most AI-assisted content programs collapse here because the site doesn’t enforce roles. “Guide” pages are secretly product pages. Category pages read like blogs. Comparison pages don’t commit to a comparison set, so they hedge and generalize. The copy looks good, but the intent is blended.

AI’s failure mode matches the prompt you give it and the mess you give it to learn from. If your brief doesn’t lock purpose, AI will blend intents because that’s the safest, most fluent thing it can do. It creates pages that are broadly relevant and specifically uncompetitive. You don’t get “the page” for a job to be done. You get “a page” that could be about several jobs, which means it’s the best destination for none of them.

Competing URLs are where AI turns your site into a cannibalization factory.

Once you can generate at speed, the most common pattern is “one topic, many pages.” Multiple posts target the same concept from slightly different angles. Multiple categories and “solutions” pages overlap because nobody picked winners. Location and service permutations multiply even when the underlying content is not truly distinct. And the classic: “Updated for 2026” goes live without retiring 2025.

At small scale, this feels like breadth. At scale, it’s signal dilution. Internal links split authority across multiple candidates. Crawling and indexing resources get spent on low-value variants. Rankings get unstable because the site can’t consistently present a canonical answer. Even Google has explicitly called out that many low-value-add URLs, including duplicate content and faceted navigation patterns, can negatively affect crawling and indexing.

This is why teams get confused. They ship more pages and see temporary movement, then volatility, then a slow “why are we weaker now?” conversation. The content didn’t get worse. The system got noisier.
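The “one topic, many pages” pattern can be caught mechanically before it dilutes anything. Here is a minimal sketch, assuming you can export each URL’s primary target query from your keyword map or CMS (the URLs and queries below are hypothetical placeholders):

```python
from collections import defaultdict

# Hypothetical content inventory: URL -> primary target query.
# In practice this comes from your keyword map or CMS export.
inventory = {
    "/blog/what-is-crm": "what is crm",
    "/guides/crm-basics": "what is crm",
    "/crm-explained-2026": "what is crm",
    "/pricing": "crm pricing",
}

def find_cannibalization(inventory):
    """Group URLs by the query they target; flag queries with more than one URL."""
    by_query = defaultdict(list)
    for url, query in inventory.items():
        by_query[query].append(url)
    return {q: urls for q, urls in by_query.items() if len(urls) > 1}

print(find_cannibalization(inventory))
```

Every flagged query needs exactly one declared winner; the rest get merged, redirected, or retired.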

Internal linking is where it becomes performative instead of directional.

With healthy IA, internal links clarify hierarchy. They teach the crawler what matters, what supports what, and what the main destinations are. With poor IA, internal links just add more paths through a maze.

AI accelerates this mistake because it’s easy to auto-generate “related articles,” crosslinks, breadcrumbs, and in-copy links. You end up amplifying loops and redundancy. You don’t get a stronger structure; you get more interconnected confusion. Google’s own documentation has long emphasized that link architecture is a crucial part of getting pages discovered and understood. Internal linking is not decoration. It’s infrastructure.
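One quick way to see what your link graph is actually teaching crawlers: count inlinks per destination and compare against the pages you consider most important. A sketch, assuming you can export internal links as (source, destination) pairs from any crawler (the URLs here are hypothetical):

```python
from collections import Counter

# Hypothetical crawl export: (source, destination) internal link pairs.
links = [
    ("/", "/guides/crm-basics"),
    ("/", "/pricing"),
    ("/blog/post-1", "/blog/post-2"),
    ("/blog/post-2", "/blog/post-1"),
    ("/blog/post-1", "/blog/post-2"),  # a second path through the same loop
]

def inlink_counts(links):
    """Count internal links pointing at each destination URL."""
    return Counter(dst for _, dst in links)

# If low-priority blog loops outscore your money pages here, the link
# graph is teaching crawlers the wrong hierarchy.
print(inlink_counts(links).most_common())
```

It’s a crude proxy, but when a supporting blog post collects more inlinks than the category page it’s supposed to support, the structure is performative, not directional.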

Now zoom out to the real cost, because this is where the stakes show up for in-house teams.

Search risk looks like diluted signals, crawl waste, inconsistent indexing, and keyword cannibalization. If you’ve ever watched Google swap URLs in and out for the same query week over week, that’s often your site telling on itself. Multiple pages are competing for the same job. Some SEO platforms and guides describe cannibalization as search engines struggling to choose the most relevant page when several target the same or very similar terms and intent.

Conversion risk shows up as users landing on “a page,” not the page. They have to do extra work to confirm they’re in the right place. Trust drops. Pogo-sticking rises. Your site feels less like a library and more like a stack of pamphlets.

Org risk is the quiet one. Teams interpret AI output as a quality upgrade because the writing is smoother. Meanwhile the system degrades. You make decisions based on surface-level “content quality” reviews while the underlying architecture accumulates debt.

Strategic risk is what happens when you confuse publishing with building. You end up “producing content” instead of creating a durable knowledge structure that compounds.

To be fair, there is a case for AI content, and it’s not a small one.

AI can work extremely well when your taxonomy is stable, page roles are defined, URL rules are enforced, and internal linking reflects hierarchy. In that world, AI’s best role is not deciding what should exist. It’s filling known gaps inside a structure you already trust. It drafts variants after intent and page purpose are locked. It helps scale documentation, explanations, and supporting content where the “what is this page?” question has already been answered by the system.

There’s also a strong case against AI-first publishing.

If your IA is unsettled, publishing more is like adding books to a library with no catalog. The first win looks great because you shipped 50 pages. The second-order effects show up later, when rankings wobble, index coverage gets weird, and everyone asks why “good content” isn’t performing. This is exactly the kind of systems-under-load failure that programmatic and templated content sites run into when uniqueness is defined by URL parameters instead of meaning. Plenty of public postmortems of programmatic SEO experiments describe the same pattern: early promise, then underperformance or drops once pages become too similar and the site’s structure doesn’t create clear winners.

If you want a practical diagnostic you can run without a replatform, do an IA readiness quick test on one directory.

Ask yourself, for each important URL in that section, whether you can label it with a single primary purpose without hedging. Ask whether there is exactly one best URL for each core topic, not three “kind of” answers. Ask whether your navigation reflects how users actually think about the space, not how your org chart thinks about it. Ask whether your templates create unique value or merely unique URLs. And ask the most revealing question: if you removed the copy, would the structure still make sense?

If you fail that test, fix the system before you scale output.

Fix the system first: a remediation sequence that actually holds up

If you want AI content to compound instead of clog, do this in order:

Step 1: Map topic → intent → best URL (pick winners)
Decide what page gets to be “the answer” for each core query family.

Step 2: Clean taxonomy (merge, rename, remove junk drawers)
Kill the buckets that exist because “we needed somewhere to put it.”

Step 3: Define page types and rules (what deserves a URL)
This is where most AI programs fail. They generate pages before they define “a page.”

Step 4: Rebuild internal linking to express hierarchy
Links are not decoration. They’re instructions. (Make them crawlable and meaningful.)

Step 5: Then apply AI to expand within the clarified model
Now AI scales clarity, not confusion.

The takeaway is simple and not negotiable: AI doesn’t fix IA. It makes the consequences arrive sooner.

If you want something concrete to do next week, don’t start by generating more pages. Start by running the readiness test on one section of the site. Identify three competing URLs that are trying to rank for the same job and choose a single canonical destination. Then write a one-page URL policy that your team can actually enforce before you turn the content machine back on.

👤 Operator of Interest: Krishna Madhavan

  • Known for: Principal Product Manager @ Microsoft AI.

  • Works at: Microsoft

  • Follow: LinkedIn

Learn This:

Semantic Similarity: A measure of how closely related two pieces of text are in meaning. Learn More

One more thing: AI is only as good as its operator, and if you are reading this newsletter, you are better than most!

Till next time,

Joe Hall

PS: Let me know what you think of this issue, or anything else here: [email protected]

Keep Reading