AI analyzing GTM data has huge potential, but there are 3 failure modes to avoid
This post was originally published on LinkedIn.
(1) Messy and confusing data
This is way harder than it sounds. Most companies are nowhere near ready to plug AI into their data and expect it not to produce absolute crap.
(2) No context/data dictionary
AI needs to know what data it's working with, which fields/objects to use, and which metrics are important.
e.g. most companies have multiple ARR fields, multiple fields for Account type, etc. If AI isn’t told which fields to use, it may pick the wrong one, and your "insights" are completely wrong.
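One lightweight way to picture a data dictionary is a single approved mapping from each metric to the one field the analysis is allowed to use. This is a minimal sketch; the field names (like `ARR_Finance__c`) are made-up illustrations, not from any actual setup:

```python
# Hypothetical mini data dictionary: maps each canonical metric to the
# ONE approved CRM field. Without this, AI may grab a stale duplicate.
FIELD_MAP = {
    "arr": "ARR_Finance__c",          # not "ARR_Legacy__c"
    "account_type": "Account_Type__c",
}

def pick(record: dict, metric: str):
    """Return the value of the single approved field for a metric."""
    return record.get(FIELD_MAP[metric])

# A record with two competing ARR fields -- only the approved one is read.
record = {"ARR_Finance__c": 120000, "ARR_Legacy__c": 95000}
print(pick(record, "arr"))  # 120000, not the stale legacy value
```

The point isn't the code; it's that the choice of field is made once, explicitly, instead of being re-guessed on every analysis.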
(3) No project/skill or upfront config for repeatable analysis
Most people just throw data into a chat interface, ask it some questions, then move on. Next time, they do it all over again in a new chat session.
The issue: the analysis runs differently every time.
This is the difference between using Claude Chat vs. Claude Code.
In Claude Code you can build a project: a skill, reusable Python scripts, and .md instructions fed in at the right moments, so the analysis follows the same process every time.
Claude Chat is like handing the task to a rushed, inexperienced intern; a project built in Claude Code is like handing it to an experienced data analyst who will take their time.
---------------------------------------------------------------------------
I know these 3 failure modes because I've had to work around each of them while building our AI data analyst.
I need to record a video on this soon, but here's what it does:
- Start with unified, clean pipeline data (solves failure mode 1)
- Work in a Claude Code project with context and understanding of the data model (solves failure mode 2)
- Run a config session in Claude Code to map stages and fields, validate data, etc. (solves failure mode 3)
- Run reusable Python scripts (built by Claude Code) to do the analysis (solves failure mode 3)
- Results are reviewed by Claude for top 15 findings (Claude is fed .md guidelines on how to approach each analysis type)
- Human reviews and gives feedback and extra context
- HTML report with findings and recommendations built by Claude Code to review with client (Claude is fed .md file with report building guidelines)
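The config-then-reusable-script steps above can be sketched as one tiny pipeline: mappings are fixed once in the config session, then the same script computes the same metrics on every run. Everything here (field names, the toy win-rate metric) is my own illustration of the pattern, not the actual scripts:

```python
# Sketch of repeatable analysis: config decided once, script run the same
# way every time. Names and the win-rate metric are hypothetical.

def analyze(deals: list[dict], config: dict) -> dict:
    """Compute summary metrics using only the fields fixed in config."""
    stage_field = config["stage_field"]   # chosen in the config session
    won_stage = config["won_stage"]
    won = [d for d in deals if d.get(stage_field) == won_stage]
    return {
        "deal_count": len(deals),
        "win_rate": round(len(won) / len(deals), 2) if deals else 0.0,
    }

deals = [
    {"StageName": "Closed Won"},
    {"StageName": "Closed Lost"},
    {"StageName": "Closed Won"},
    {"StageName": "Negotiation"},
]
config = {"stage_field": "StageName", "won_stage": "Closed Won"}
print(analyze(deals, config))  # {'deal_count': 4, 'win_rate': 0.5}
```

Because the config is an artifact rather than an ad-hoc prompt, two runs a month apart (or two analysts) produce the same numbers, which is the whole difference from the throwaway-chat approach.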
I've been talking about the potential of this for a while, but it's actually working now: we're delivering the output to clients, helping them get insights and make decisions, and using it to inform the roadmap.
