Strategic AI Advisory
I work with specialised organisations to turn scattered AI experiments into something more systematic — helping build the internal capability to verify and ship with confidence.
Usually a 30-minute chat. No decks required.
I design and lead AI programmes at Northwestern Kellogg and Imperial College Business School, deliver guest sessions at MIT xPRO, and am part of the teaching team at IDEO U. The teaching and the advisory feed each other: what I learn from clients shapes what I teach, and what I develop for the classroom sharpens how I advise.
In my experience, most organisations aren't short of AI pilots. What they're often missing is a reliable path to trusted deployment.
1. Teams can build impressive demos — that part's not the problem. The harder question is whether you can trust those outputs enough to put them in front of a customer or a regulator. That's often where things stall.
2. The organisations that actually scale tend to be the ones designing governance to say "yes, safely" — clear guardrails that enable speed, rather than committees that slow everything down.
3. The goal isn't really a successful project. It's building an organisation where your own people can evaluate, govern, and ship — without waiting for external help every time.
The work isn't usually about adding more pilots. It's more about changing the ratio of ideas to things that actually get deployed.
From Ideas to Deployments
A science-led organisation with strong AI momentum across multiple countries. We helped the executive team turn that energy into a coherent strategy, connecting AI adoption to what makes the business valuable: expertise, rigour, and client trust.
Food safety & life sciences · Europe

From Licences to Value
AI tools had been rolled out company-wide, but leadership wanted clearer results. I ran executive sessions focused on turning experimentation into real workflows, real decisions, real value.
Events & information · Global

From Drafts to Decisions
Field engineers produce safety-critical reports under strict regulatory frameworks. We delivered an AI literacy programme and are building workflow automation that cuts documentation time while keeping professional accountability intact.
Water & environmental · UK

From Shadow AI to Safe Harbour
People were already using AI tools — they just needed a sanctioned way to do it. I'm helping leadership gain visibility over experimentation and build a lightweight framework that scales what's already working.
Healthcare · UK

From Skills to Practice
Brought in to equip the design team with practical AI skills for research, ideation, and prototyping. We built structured prompts, embedded ethical watch-outs, and a real design challenge. The team ran follow-up sessions internally.
Design & fintech · UK

From Theory to Judgment
I delivered the AI sessions for a cross-community Fellowship at its Oxford residency — scenario-based work designed for people whose currency is accountability, relationships, and public trust.
Peacebuilding & civic leadership · Northern Ireland

Mostly mid-market and enterprise organisations in specialised B2B sectors — scientific services, healthcare infrastructure, water treatment, events, fintech, and other trust-dependent industries. I also work with civic and public sector programmes where AI judgment matters as much as AI capability.
They're not asking whether to use AI — they're asking how to scale it safely without losing what makes them valuable.
Long-memory, safety-critical operators who need to keep humans in the loop by design. Water treatment, lab testing, medical services — that kind of thing.
Firms whose value rests on expert interpretation and invisible reliability. Where AI needs to enhance judgment, not replace it.
Organisations using AI to amplify human connection in complex experiences — events, media, professional services.
If you're past "should we do AI?" and thinking about how to actually make this work — I'd be happy to have a conversation.
Start a conversation

— David