There are two methods of AI GTM data analysis (and one of them doesn't work, IMO):

This post was originally published on LinkedIn.

1. Dump a load of fragmented data into ChatGPT/Claude/etc. and ask for insights

> The AI will try to do its thing and may find a pattern or two, but the process is non-repeatable: it has no data model it understands and no context. The output is fragile, potentially wrong, and vague.

2. Unify your data and build a repeatable GTM AI analysis pipeline

>> You have clean data, a data model the AI is trained on, and a process to run the analysis.

At CS2, we're betting that (2) is the way to go, and I'm spending a lot of time building this with Claude Code:

START WITH UNIFIED DATA:

We're starting with the unified data model that we build for almost all our clients.

> One object that has all the account, contact, lead, opportunity, campaign, signal, and stage data unified.
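To make the idea concrete, here's a minimal sketch of what one row in a unified object like this might look like. The field names are purely illustrative, not CS2's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of one record in a unified GTM object.
# Field names are illustrative assumptions, not a real schema.
@dataclass
class UnifiedRecord:
    account_id: str
    contact_id: Optional[str] = None
    lead_source: Optional[str] = None
    opportunity_stage: Optional[str] = None
    campaign: Optional[str] = None
    signals: list = field(default_factory=list)  # e.g. intent/engagement signals

record = UnifiedRecord(
    account_id="001A",
    opportunity_stage="Discovery",
    campaign="Q3 Webinar",
    signals=["pricing_page_visit"],
)
```

The point is that every downstream analysis reads one consistent shape instead of stitching together accounts, contacts, and opportunities on the fly.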

CLAUDE CODE PROCESS FOR ANALYSIS:

As you can see in the image, we've used Claude code to build a step-by-step process that does this:

>> CSV import of anonymized data from the unified object in SFDC.

>> We have a config step where we map the client stages, different ICP fields, etc. This feeds the Python scripts in the next step.
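A config step like this could be as simple as a per-client mapping that normalizes stage names and lists the ICP fields to use. This is a hypothetical sketch, not the actual config format:

```python
# Hypothetical per-client config: maps client-specific SFDC stage names
# onto a canonical funnel so the downstream Python scripts stay generic.
CLIENT_CONFIG = {
    "stage_map": {
        "Qualifying": "stage_1_qualified",
        "Eval": "stage_2_evaluation",
        "Negotiation/Review": "stage_3_commit",
    },
    "icp_fields": ["employee_count", "industry", "region"],
}

def normalize_stage(raw_stage: str, config: dict) -> str:
    # Fall back to a sentinel so unmapped stages are easy to spot in QA
    return config["stage_map"].get(raw_stage, "unmapped")
```

Keeping the client-specific quirks in one config file is what lets the same analysis scripts run unchanged across clients.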

>> We have five different analysis types with established (fixed) Python scripts that we are iterating on. This method deterministically produces the same output across all sub-analysis types for each client.
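"Deterministic" here just means the metrics are computed with plain code, so the same rows always produce the same numbers before any AI is involved. A toy example of one such sub-analysis (funnel conversion rates; the logic is my illustration, not CS2's actual script):

```python
from collections import Counter

def stage_conversion_rates(records: list[dict], funnel: list[str]) -> dict:
    """Deterministic funnel conversion: identical input rows always yield
    identical metrics, so the AI layer downstream sees stable numbers."""
    reached = Counter()
    for row in records:
        # A record that sits at a stage has also passed every earlier stage
        if row["stage"] in funnel:
            for s in funnel[: funnel.index(row["stage"]) + 1]:
                reached[s] += 1
    rates = {}
    for prev, nxt in zip(funnel, funnel[1:]):
        rates[f"{prev}->{nxt}"] = (
            round(reached[nxt] / reached[prev], 3) if reached[prev] else 0.0
        )
    return rates

rows = [
    {"stage": "qualified"},
    {"stage": "evaluation"},
    {"stage": "commit"},
    {"stage": "qualified"},
]
metrics = stage_conversion_rates(rows, ["qualified", "evaluation", "commit"])
# metrics == {"qualified->evaluation": 0.5, "evaluation->commit": 0.5}
```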

>> The sub-analysis metrics are fed into the Claude API (alongside a Markdown file that contains context and expertise on how to find insights and recommendations in the data) for the AI to find patterns.

>> This writes back to Markdown files for each sub-analysis for a human to review. The human provides extra context, the top insights we should focus on, and any additional patterns found.

>> This is fed back into Claude Code, which then creates an HTML page output with charts, insights, and recommendations.
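The final render can be as simple as templating the reviewed insights into a standalone page. A minimal sketch (a real version would embed charts, e.g. via a JS charting library):

```python
from html import escape

def render_report(title: str, insights: list[str]) -> str:
    """Hypothetical sketch: turn reviewed insights into a standalone
    HTML page. Chart embedding is left out for brevity."""
    items = "\n".join(f"    <li>{escape(i)}</li>" for i in insights)
    return (
        "<!DOCTYPE html>\n"
        "<html>\n"
        f"<head><title>{escape(title)}</title></head>\n"
        "<body>\n"
        f"  <h1>{escape(title)}</h1>\n"
        f"  <ul>\n{items}\n  </ul>\n"
        "</body>\n"
        "</html>"
    )

page = render_report("Pipeline Analysis", ["Stage 2 drop-off is 50%"])
```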

HOW IT'S GOING

I've spent a lot of hours working through different versions of this: testing, removing bugs, etc. Just last week, we got the first version working really well, and I demoed it end to end for Alison, Crissy, Claire, and Christie.

I hate to be too dramatic, but this really is saving 20+ hours of manual work: creating and reviewing reports, finding insights, documenting findings, writing up notes, and building a final report.

BIGGEST LEARNINGS FOR AI DATA ANALYSIS

  1. You need excellent data.
  2. You need to figure out what the best human analysts do and then replicate that process.
  3. You still need an expert human in the loop to find more patterns and add extra context once they see the data.
  4. But overall, the manual work of reviewing all of this data and getting from data to insight is enormously compressed.