It's too simplistic to judge GTM AI adoption at the team/org level

This post was originally published on LinkedIn.

The mistake people make is treating AI adoption as a company- or team-wide thing, then judging the whole team or company against it (e.g. "our GTM org is at Level 2 of AI adoption").

But that's not how it works.

HERE'S WHAT WE'RE SEEING:

The better way to think about it is that each GTM use case has its own maturity level (the four-level GTM maturity framework in the image below can help here).

Your account research might be crushing it at Level 3 (fully automated workflows in Clay with AI Claygent enrichment and signals). But your routing is still stuck at Level 1 (asking ChatGPT how to build in Leandata).

Your account prioritization/context engine might be at Level 4 (with a tool like Actively AI), while your data analysis is barely Level 1 (reviewing spreadsheets while chatting with Claude).

THE OPPORTUNITY:

The way we see the best GTM operators working (and the way we approach this with clients) isn't trying to "become an AI org." It's systematically moving high-priority use cases up the AI maturity levels.

But it takes a lot of work, and internal teams are busy keeping the lights on. So if you want to move up the levels, you have to figure out how to find the capacity to do so (which is why I'm very bullish on agencies like CS2 that help companies do just that).

Most of us already have the tools: Clay, Claude Code, n8n, even legacy tools like Hubspot/SFDC. The hard work is rethinking how each use case should operate at the next level, and then actually building, testing, and iterating.

I'll break down one use case (account research) in the comments to show how it changes across Levels 1-4.

We'll post a lot more about this, and more use cases to pressure-test it, here on LinkedIn. If the levels aren't clear, let me know.