We've spent a long time thinking about what it means to use AI well. Not just the tools and the techniques, but the harder questions underneath: what AI is actually good at, where human judgment is irreplaceable, and what responsible adoption looks like in practice rather than in principle.

These are the commitments that shape how we work — internally, and in the advice we give to clients.

1. AI enhances human capability — it doesn't replace human judgment


We believe AI is most powerful when it amplifies what people do well: recognising patterns across large amounts of information, generating options worth considering, handling the repetitive so people can focus on the consequential. But the decisions that matter — the ones with real consequences for real people — require human judgment, context, and accountability.

In our own work, every AI-assisted output is reviewed, validated, and owned by a human. We don't hand over final judgment to a model. In our advisory work, we help organisations design the same approach: AI systems with humans genuinely in the loop, not as a rubber stamp but as an active participant.

This isn't caution for its own sake. It's a practical recognition that probabilistic systems produce probabilistic outputs. The organisations getting lasting value from AI are the ones that have designed human verification into the workflow from the start — not bolted it on afterwards.

2. Ethics is not a constraint on AI — it's part of what makes it work


We take the ethics of AI seriously, not because it's required of us, but because we've seen what happens when it isn't. Bias embedded in training data. Outputs that feel authoritative but are factually wrong. Automation that removes the human accountability that customers and regulators depend on.

Ethical AI, in our view, means being honest about what a system can and can't do. It means designing for fairness and inclusion, not assuming neutrality. It means being transparent with clients, students, and partners about when AI is involved in producing work. And it means not recommending tools or approaches we wouldn't be comfortable operating under ourselves.

We actively bring ethics into every engagement — not as a separate workstream, but as a lens on the work itself. In financial services work, this has meant defining clear boundaries around what AI can recommend and what requires human sign-off. In civic and public sector work, it has meant exploring how AI can serve inclusion and accountability rather than undermine it. The context changes; the commitment doesn't.

3. We believe in human-centred AI — designed around how people actually work


The organisations that get lasting value from AI are rarely the ones that deployed the most tools. They are the ones that designed AI to fit how their people actually think, decide, and take responsibility. That is a different starting point — and it leads to very different outcomes.

Human-centred AI means starting with the human workflow, not the model capability. It means asking what decisions people need to make, what information they need to make them well, and where AI genuinely helps versus where it introduces noise or erodes the judgment it was supposed to support. It means designing for the person using the system, not just the system itself.

It also means being honest about impact. AI changes what work looks like — for individuals, for teams, and for the people on the receiving end of AI-informed decisions. We think those are conversations worth having directly, not managing around. The organisations furthest along are the ones that had real conversations about what AI means for their people, early on, and designed with that in mind. We help facilitate those conversations — and then build the systems that follow from them.

4. The goal is capability — not dependency


We think about success differently from most consultancies. A successful engagement, for us, isn't one where the client needs us again next quarter. It's one where the client's own people can evaluate, govern, and deploy AI with confidence — without waiting for external help every time.

This means we build frameworks people can use independently. It means we teach rather than just advise. It means we're explicit about our reasoning, so clients can apply the thinking to the next challenge themselves. The best outcome is an organisation that genuinely doesn't need us anymore — because we've helped them build what they needed internally.

It's a longer-term bet, and it creates a different kind of relationship. The organisations that come back to us do so because they trust the thinking, not because they're dependent on it.

5. We are transparent about how we use AI — including in our own work


We use AI tools, including Claude (Anthropic), in our research, analysis, content development, and advisory preparation. We think it is important to be open about this — and it would be inconsistent for a strategic AI advisory to be otherwise.

What we are clear about is the boundary: AI assists our thinking; it doesn't replace it. Client-sensitive information is handled in accordance with our Information Security and AI Acceptable Use policies. All outputs that reach clients have passed through human review and professional judgment. We take full responsibility for the quality and accuracy of our work, regardless of the tools involved in producing it.

We bring the same expectation of transparency to client work. When AI is involved in a process or output, we say so. We don't think that transparency diminishes the work — we think it's part of what makes it trustworthy.

These principles aren't a destination. AI is moving fast, and our thinking moves with it. What stays constant is the underlying commitment: to use AI in ways that are honest, human-centred, and worth trusting.

David Kolb
Director, David Kolb Consultancy Ltd
March 2026