01

Specialised organisations need a different kind of AI strategy

If you compete on deep expertise, customer trust, or regulatory credibility rather than just operational efficiency, generic AI advice is risky. It tends to treat every organisation as a cost problem to optimise.

The organisations I work with compete on what they know, who trusts them, and what they're licensed to do. AI can amplify all of that. But deployed carelessly, it commoditises the very thing clients pay a premium for.

I think of these as four things worth protecting: liability, data, trust, and regulatory standing. The question isn't whether to adopt AI. It's how to adopt it without eroding what makes you valuable.

02

Most leaders don't need more AI knowledge. They need the confidence to act.

The executives I work with aren't confused about what AI is. They're uncertain about what to do about it — what's worth pursuing, what to ignore, and how to make those calls with conviction when the landscape shifts every quarter.

That's the real job. Not explaining transformers, but helping leaders move from "we should probably do something about AI" to "here's exactly what we're doing and why", and making sure they can keep making those decisions long after I've left the room.

I spend most of my time translating uncertainty into structured choices. A new model release, a competitor's announcement, a board question — these are decision moments, not technology moments. Getting good at them is what separates strategy from reaction.

03

AI adoption is a people problem, not a technology problem

"What does this mean for my role?" is the question nobody asks in the all-hands meeting but everyone's thinking about on the drive home. The resistance that follows isn't irrational — it's entirely predictable.

I keep seeing the same pattern: organisations invest in the technology, skip the human conversation, then wonder why nobody uses the system they built. The best implementations start by creating clarity about what AI changes and what it doesn't — honestly, not with corporate platitudes.

In most organisations I work with, people are already using AI — they're just not telling anyone. There's no sanctioned way to experiment, so they do it quietly. The fix isn't usually a policy document. It's creating the conditions where experimentation becomes visible and legitimate.

Don't wait for perfect. Design for imperfection.

The organisations getting value from AI right now aren't the ones with the best technology. They're the ones that learned to work with imperfect systems, shipping at 92% accuracy because they've designed verification processes to catch the remaining 8%.

Waiting for AI to be perfect tends to mean waiting forever. The strategic question is: what level of imperfection can you design around? The organisations that get good at checking AI outputs — quickly, systematically — are building real capability while everyone else debates readiness.

04

Build capability, not dependency

My goal with every engagement is to make myself unnecessary. I build the system — the frameworks, the governance, the training — and you run with it. Six months later, you should be making excellent AI decisions on your own.

This isn't altruism. Organisations that become genuinely capable tell other organisations. And those referrals arrive pre-qualified — they already know how this works, and they want the same thing.

I don't separate AI strategy from experience design, from leadership development, from organisational change. They're the same conversation, or should be. The best AI decisions happen when someone asks three questions at once: does this strengthen our position, can we actually build it, and will our people make it work?

If any of this sounds familiar

These ideas come from working with organisations navigating the same questions you're probably facing. If you'd like to think through what AI strategy looks like for your specific situation, I'm happy to have that conversation.

Start a conversation

No pitch. Just a useful first discussion.

— David