Strategic AI Advisory

Most AI pilots don't make it to production. I've spent a lot of time thinking about why.

I work with specialised organisations to turn scattered AI experiments into something more systematic — helping build the internal capability to verify and ship with confidence.

Let's have a conversation · See how I think

Usually a 30-minute chat. No decks required.

I also teach this — to the people making the decisions

I design and lead AI programmes at Northwestern Kellogg and Imperial College Business School, deliver guest sessions at MIT xPRO, and am part of the teaching team at IDEO U. The teaching and the advisory feed each other: what I learn from clients shapes what I teach, and what I develop for the classroom sharpens how I advise.

Northwestern Kellogg · AI Strategies for Business Transformation
Imperial College Business School · AI for Business Transformation: Generative AI, Agentic AI and Beyond
MIT xPRO · AI Strategy and Leadership Program
IDEO U · AI x Design Thinking Certificate

Industries

Financial services · Scientific & healthcare services · Water & environmental services · Public sector & civil society · Events & information · Design & fintech

What I've learnt from the post-pilot reality

In my experience, most organisations aren't short of AI pilots. What they're often missing is a reliable path to trusted deployment.

1. The bottleneck is usually verification

Teams can build impressive demos — that part's not the problem. The harder question is whether you can trust those outputs enough to put them in front of a customer or a regulator. That's often where things stall.

2. Good governance should accelerate, not block

The organisations that actually scale tend to be the ones designing governance to say "yes, safely" — clear guardrails that enable speed, rather than committees that slow everything down.

3. I think capability matters more than dependency

The goal isn't really a successful project. It's building an organisation where your own people can evaluate, govern, and ship — without waiting for external help every time.

How this tends to play out

The work isn't usually about adding more pilots. It's more about changing the ratio of ideas to things that actually get deployed.

From Ideas to Deployments

Global Testing & Certification Company

A science-led organisation with strong AI momentum across multiple countries. We helped the executive team turn that energy into a coherent strategy, connecting AI adoption to what makes the business valuable: expertise, rigour, and client trust.

Food safety & life sciences · Europe

From Licences to Value

Global Information Business

AI tools had been rolled out company-wide, but leadership wanted clearer results. I ran executive sessions focused on turning experimentation into real workflows, real decisions, real value.

Events & information · Global

From Drafts to Decisions

Regulated Technical Services

Field engineers produce safety-critical reports under strict regulatory frameworks. We delivered an AI literacy programme and are building workflow automation that cuts documentation time while keeping professional accountability intact.

Water & environmental · UK

From Shadow AI to Safe Harbour

Healthcare Infrastructure

People were already using AI tools — they just needed a sanctioned way to do it. I'm helping leadership gain visibility over experimentation and build a lightweight framework that scales what's already working.

Healthcare · UK

From Skills to Practice

Global Fintech

Brought in to equip the design team with practical AI skills for research, ideation, and prototyping. The work combined structured prompts, embedded ethical watch-outs, and a real design challenge. The team ran follow-up sessions internally.

Design & fintech · UK

From Theory to Judgment

Civic Leadership Programme

I delivered the AI sessions for a cross-community Fellowship at its Oxford residency — scenario-based work designed for people whose currency is accountability, relationships, and public trust.

Peacebuilding & civic leadership · Northern Ireland

Who I tend to work with

Mostly mid-market and enterprise organisations in specialised B2B sectors — scientific services, healthcare infrastructure, water treatment, events, fintech, and other trust-dependent industries. I also work with civic and public sector programmes where AI judgment matters as much as AI capability.

They're not asking whether to use AI — they're asking how to scale it safely without losing what makes them valuable.

Regulated Technicians

Long-memory, safety-critical operators who need to keep humans in the loop by design. Water treatment, lab testing, medical services — that kind of thing.

Trust-Dependent Professionals

Firms whose value rests on expert interpretation and invisible reliability. Where AI needs to enhance judgment, not replace it.

Experience Architects

Organisations using AI to amplify human connection in complex experiences — events, media, professional services.

If this resonates with where you are

If you're past "should we do AI?" and thinking about how to actually make this work — I'd be happy to have a conversation.

Start a conversation

— David