📖 In This Issue

  • Featured Snippets (News & Resources)

  • Cover Story: My 3-Step AI Workflow for SEO Data Analysis

  • Operator of Interest: Noah Learner

  • Learn This: Multimodal AI

📰 Featured Snippets (News & Resources)

Google plans to invest $40 billion in Anthropic, the maker of Claude. It’s surprising to see such a large investment in a direct competitor to Gemini.

Chris Long published a very interesting analysis of OpenAI’s bot crawl behavior (and a few other things) over at the Botify blog.

Google’s Liz Reid claims AI Overviews are removing “bounce clicks” while still sending along users who want deeper information. Reid has been peddling this nonsense for the better part of a year, assuming those with first-party data can’t read a basic spreadsheet.

Danny Sullivan tells us his answer to AI search is what he calls “non-commodity content”. As usual, a new name for vague advice that can’t really be measured or implemented with clarity.

My 3-Step AI Workflow for SEO Data Analysis

If you’ve spent any time on LinkedIn lately, you’ve probably seen the claim: “Just upload your spreadsheet to an LLM and ask for the analysis.”

It sounds efficient. It also tends to produce the kind of analysis that falls apart the second someone asks, “How did you calculate that?” or “Does this still hold if we change the date range?”

Using AI to help with data analysis is smarter than having AI do all the work alone. AI is a multiplier of quality or debt. If you feed it messy definitions and unclear methods, it will happily multiply the mess.

Here’s the three-step method I use when the goal is accurate analysis that stakeholders actually trust.

Step 1) Ask the LLM to plan the analysis structure and design

Before you write your first prompt, get aligned on the job the analysis is supposed to do.

I start by defining the main goal in plain language. Then I write down the specific questions that must be answered to reach that goal. Finally, I list what data sources exist and what they can and cannot tell us.

Then I ask an LLM to produce a one-page “analysis design doc” that includes the structure of the work, the KPIs and metrics we’ll use, and the methodology for how we’ll interpret results.

Example prompt:

Help me design a data analysis project based on the project details below. Define the required methodologies, processes, KPIs, and metrics needed to fully reach the main goal and answer the main questions. Formalize the plan into an internal document that can be referenced later during the analysis, and afterward when reporting to stakeholders.

# Project details
## Main Goal: 
- To improve search engine visibility with better internal linking.
## Main Questions:
- Do internal links currently help with search engine visibility?
- Are there any areas of optimization we can do for our current internal links?
## Data sources:
- Screaming Frog Crawl Data
- Google Search Console Performance Report Data
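For the project above, the returned design doc might be shaped roughly like this (an illustrative sketch of the structure to expect, not actual LLM output):

```markdown
# Analysis Design Doc: Internal Linking & Search Visibility
## Methodology
- Join Screaming Frog inlink counts to GSC performance data by URL
- Compare clicks/impressions across inlink-count buckets
## KPIs & Metrics
- Inlinks per URL, clicks, impressions, average position
## Interpretation
- Treat any inlinks-to-visibility relationship as diagnostic, not causal
```

If the doc comes back missing one of these pieces, push back before moving to Step 2; gaps in the blueprint become gaps in the analysis.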

This is where AI shines: it’s great at helping you turn a fuzzy request like “tell me what’s happening with organic traffic” into a clean plan that separates diagnostic questions (what changed?) from causal questions (why did it change?) and decision questions (what should we do next?).

The output isn’t the analysis. It’s the blueprint. And it’s the thing that prevents you from spending days calculating the wrong answers perfectly.

Step 2) Run the analysis (without defaulting to AI)

This is where a lot of people get lazy.

Less sophisticated methods are often better here because they’re easier to test, easier to explain, and easier to defend. When stakeholders challenge a conclusion, you want to be able to trace it back to a clear definition and a reproducible method.

LLMs can quietly make mistakes, especially when methods are not clearly defined or the model can’t reliably “hold” the full dataset context at once. Hallucinations are a known, ongoing reliability problem in LLMs, even as models improve, which is why treating a fluent narrative as “analysis” is risky.

So instead of asking an LLM to “analyze my data,” I use AI to aid analysis based on the size and shape of the work.

For smaller datasets, I’ll have an LLM help me write tighter spreadsheet formulas, sanity-check logic, or translate a messy manual process into something repeatable. If you’re using Excel, Microsoft is explicitly pushing this direction with Copilot features that help users create and understand formulas and even bring AI into the grid.

For medium-sized datasets, I lean on AI to speed up SQL. This is the sweet spot where the database is doing the real work, and AI helps you get to the right query faster, reduce syntax mistakes, and produce cleaner outputs you can rerun next month.

Example prompt:

Write a SQL query that returns the average of the 'InLinks' column for rows in the 'pages' table where the URL contains '/product/'.
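The payoff of this approach is that the returned query can be tested before you trust it. Here’s a minimal sketch that checks the kind of query that prompt should produce against a toy in-memory table (the 'pages' table and its numbers are invented for illustration):

```python
import sqlite3

# Build a tiny toy version of the 'pages' table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (URL TEXT, InLinks INTEGER)")
conn.executemany(
    "INSERT INTO pages VALUES (?, ?)",
    [
        ("/product/widget", 12),
        ("/product/gadget", 8),
        ("/blog/post-1", 3),  # excluded by the URL filter
    ],
)

# The sort of query an LLM would typically return for that prompt.
query = """
    SELECT AVG(InLinks)
    FROM pages
    WHERE URL LIKE '%/product/%'
"""
avg_inlinks = conn.execute(query).fetchone()[0]
print(avg_inlinks)  # (12 + 8) / 2 = 10.0 on this toy data
```

If the toy result matches what you can verify by hand, you can run the same query against the real table with much more confidence.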

For big datasets, I use AI as a coding assistant. Not to “decide” what’s true, but to write Python scripts that parse logs, aggregate by page type, compute deltas, and output tables that match the questions in the design doc. The math stays inspectable. The workflow stays reproducible.
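A minimal sketch of what “aggregate by page type, compute deltas” looks like in practice. The rows, page types, and column meanings here are invented for illustration; a real script would read them from your crawl or performance exports:

```python
from collections import defaultdict

# Toy rows: (page_type, clicks_previous_period, clicks_current_period)
rows = [
    ("product", 120, 150),
    ("product", 80, 60),
    ("blog", 40, 55),
]

# Aggregate clicks by page type for each period.
totals = defaultdict(lambda: {"previous": 0, "current": 0})
for page_type, prev_clicks, curr_clicks in rows:
    totals[page_type]["previous"] += prev_clicks
    totals[page_type]["current"] += curr_clicks

# Compute the per-type deltas the design doc's questions ask about.
deltas = {
    page_type: t["current"] - t["previous"] for page_type, t in totals.items()
}
print(deltas)  # {'product': 10, 'blog': 15}
```

Every number in the output can be traced back to a specific line of arithmetic, which is exactly the inspectability the narrative-only LLM approach gives up.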

The rule I follow is simple: if I can’t explain the method in plain language, I don’t ship the conclusion.

Step 3) Use AI to package it all up into a report

Once I have real outputs from Step 2, I go back to the one-page design doc from Step 1.

Then I ask the LLM to write the stakeholder-facing report, using the original questions as the skeleton and the results as the evidence.

This is where you should be explicit about the audience. A report for a VP of Marketing should emphasize decisions, risks, and next steps. A report for a product team might emphasize query intent shifts, templates, indexing behavior, and what changed in the funnel. If you’re using AI to summarize, you still want the model to reflect your definitions and constraints.

Example prompt:

Please write a report summarizing the data analysis findings inside AnalysisFindings.csv. Make sure the report is structured to answer the main questions and goals found in AnalysisDesign.pdf. The intended audience for this report is the VP of Digital Marketing.

Remember this is just a first draft that you should refine and edit by hand on your own. When the report reads cleanly and the math is defensible, you get the best of both worlds: speed without sacrificing trust.

Bonus Tip: Start a new thread for each step.

Start a new thread for each of the steps above to reset the context window. That way, for example, your request for a SQL query or Python script in Step 2 won’t influence the final report you write in Step 3 (which should not include that level of detail).

~

The viral “upload a spreadsheet and ask for insights” workflow optimizes for attention, not outcomes.

If you want analysis that holds up under pressure, treat AI like an assistant to your process, not a substitute for having one.

Plan with AI. Compute with reliable methods. Write with AI. Defend with clarity.

If you try this three-step approach on your next data analysis, you’ll feel the difference the moment someone asks, “What changed, and how do we know?”

👤 Operator of Interest: Noah Learner

Learn This:

Multimodal AI: AI systems that process multiple input types such as text and images. Learn More

One more thing: AI is only as good as its operator, and if you are reading this newsletter, you are better than most!

Till next time,

Joe Hall

PS: Let me know what you think of this issue, or anything else here: [email protected]
