
The Leader Who Hasn't Used AI... Shouldn't Be Setting AI Strategy

How materials leaders can effectively implement AI in their organizations by understanding its capabilities, limitations, and the importance of expert involvement.

We just finished our first fully AI-enabled consulting engagement. Every step — research, modeling, report-out — ran with AI in the loop. I went in expecting to be impressed. I was. But not always where I expected, and not without friction that taught me something more important than any capability demo ever could have.

How AI Helped Us

The moments that genuinely stopped me were about data.

We needed to build a picture of a market using sources that were scattered, inconsistent, and largely unstructured — analyst reports, trade publications, public filings, industry databases. In the past, that synthesis job would have taken days. With AI doing the aggregation, we had a coherent framework in hours. Not a summary — a structured model we could actually work from. The AI's ability to pull signal from noise, across dozens of sources simultaneously, is real and it's significant.

The second capability that impressed us was modeling speed. We needed to run scenario analysis on a system with several interacting variables. The AI built the model structure quickly, drawing on sound methodological approaches, and we were stress-testing scenarios the same day we framed the problem. That's a different rhythm than I've worked in before.

But here's what also happened: as the engagement got more specific — more contextual, more client-particular — the AI started making mistakes. Not dramatic ones. Quiet ones. It would apply a general industry framework where the client's situation called for something more tailored. It would fill a context gap with a plausible assumption rather than flagging the gap. Our consultant had to validate outputs at every step, and in several cases, expert judgment had to override the AI's approach entirely.

The capability is real. So is the validation burden.

Operating at the Speed of Thought

There is a feeling you get when you work with AI on a real problem that I can't fully describe, except to say it's unlike anything else in a professional context.

The closest I can get: it feels like thinking out loud and having your thoughts handed back to you, structured and extended, in the time it takes to pour a coffee. Questions become frameworks. Rough hypotheses become testable models. The lag between "I wonder if..." and "here's what the data suggests" compresses to almost nothing.

That feeling matters strategically — not because it's pleasant, but because it changes what you understand AI to be for. Until you've felt it, you're working from a description of a color you've never seen. You can nod along to briefings about AI capabilities. You can read the case studies. But you won't know where the tool actually fits in your organization, where it breaks down, or what it demands from the people around it, until you've sat with it on something real.

And that ignorance has consequences. The leader who hasn't felt it will underestimate the validation burden — because the outputs look authoritative even when they're wrong. They'll overestimate what can be delegated to junior staff — because they won't understand that context is the actual input, and context lives with the people who've spent years in the problem. They'll design the wrong adoption roadmap, because they're imagining a tool when they should be imagining a new way of working.

Actioning the Insight

First, use it yourself — on a real work problem, not a demo. Have a question you actually need answered. Give it a deliverable that matters. A briefing from your innovation team is not a substitute. You need the experience of being the user, including the moment when the output is confidently wrong and you have to catch it.

Second, anchor your AI implementation with your experts, not your youngest staff. The instinct to hand AI tools to early-career analysts because "they're more comfortable with technology" gets the logic exactly backwards. AI is only as good as the context it's given. Context is the product of experience. Your experts need to be in the room — guiding the prompts, validating the outputs, and knowing when to override.

Third, treat your first AI-enabled engagement as a learning lab, not a showcase. Go in expecting to discover what breaks. Document it. What the AI can't do in your specific context is at least as important as what it can, and you won't know either until you've tried.

The leaders who will set good AI strategy are the ones who've earned the right to — by using it, struggling with it, and understanding it from the inside.

That's not a technology question. It's a leadership one.

Until next week,

Kendall

P.S. This post was written by AI (Claude Code), using my jumbled, stream-of-consciousness notes, after being contextualized on my writing style, customer profile, and voice. The insights are mine; the voice is mine; the time it took me was cut by 80%. If you want to get started on commercial uses of AI, reply to this note.
