At the time of writing, I have led AI efforts at four different companies, across sectors and company sizes (from a 1,300-person organisation to a 100-person Series A startup). I have also personally built applied AI models at six more companies, again across sectors and sizes (from Deliveroo at IPO stage to enterprises like Shell and Unilever).

Over this time, I have seen many initiatives, some succeeding amazingly and some failing silently.

I have thought about this for a while, and I want to try to distil what I think are some key enablers for leveraging AI, and some practical steps that can follow from them.

Three enablers

1. Being clear about how AI can add value, starting from leadership

The way I think about it, AI can help in three ways:

1. Reduce costs (e.g. automating repetitive tasks) - the easiest to work out, but the added value is capped

2. Improve outcomes (e.g. better predictions, fewer errors) - more difficult to work out, but the added value has a looser cap

3. Enable things you couldn’t do before (e.g. creating a new offering) - can be purely experimental, and has the most potential

Being clear about what you are trying to achieve helps cut through the noise of “doing AI”, and helps you identify and prioritise opportunities for AI applications.

I must have seen dozens of projects by now where teams tried to use AI without pausing to check that the room for added value was big enough, resulting in months of wasted effort. This is where I have seen most applied AI initiatives fail silently, or, even worse, become the main vehicle for empty success celebrations while not actually moving the needle.

Interestingly, this has not changed since the days of classic Machine Learning and Data Science, probably because these are simply the core levers available to any organisation that wants to improve (whether it uses AI or not).

2. Being ready for experimentation

My experience has been that, for many problems, it is difficult to figure out exactly how AI fits into the picture without trying.

Example: I worked on a project where we wanted to help prevent theft using automated video surveillance. After building the AI models, we saw that some did really well under certain lighting or scenery conditions but failed spectacularly in others. Other approaches reliably detected theft every time, but at the cost of quite a few false detections. What made most sense was to treat this as input to a product decision and find a balance based on validated customer appetite.
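To make that trade-off concrete: in detection problems like this, the tension is usually between precision (how many alerts are real) and recall (how many real events you catch). Below is a minimal Python sketch with entirely made-up numbers, not figures from the actual project, showing how two approaches with these profiles would compare:

```python
# Toy comparison of two hypothetical theft-detection approaches.
# All counts are illustrative, not real project data.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute (precision, recall) from raw detection counts."""
    precision = tp / (tp + fp)  # fraction of alerts that were real thefts
    recall = tp / (tp + fn)     # fraction of real thefts that raised an alert
    return precision, recall

# Approach A: strong in good lighting, misses events in other conditions.
p_a, r_a = precision_recall(tp=80, fp=5, fn=20)

# Approach B: catches every event, but fires many false alarms.
p_b, r_b = precision_recall(tp=100, fp=60, fn=0)

print(f"A: precision={p_a:.2f}, recall={r_a:.2f}")  # A: precision=0.94, recall=0.80
print(f"B: precision={p_b:.2f}, recall={r_b:.2f}")  # B: precision=0.62, recall=1.00
```

Neither profile is better in the abstract; which balance is acceptable is exactly the product decision described above.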

One interesting consideration in the age of LLMs and agents is that adoption is already happening, whether companies plan for it or not. And each function is using it differently: a marketing team might use it to generate content, a finance team to spot anomalies, an engineering team to write code and tests.

The question is really whether the company helps channel that energy, acknowledging you need room for experimentation.

The approach can be both top-down (strategic priorities from leadership) and bottom-up (teams finding their own use cases). In my experience the best results come when you do both.

3. Finding ways to measure outcomes

You need some way of knowing whether AI is actually helping. It doesn’t have to be perfect; especially early on, many benefits are qualitative: time saved, fewer errors, faster iteration. A quick survey, a before-and-after comparison, or a simple log of time spent will do. Over time you can build more rigour, but in the early days, directionally right is good enough.
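As a sketch of what “a simple log of time spent” could look like (the file name and task labels here are invented for illustration), a few lines of Python writing to a shared CSV are enough to get started:

```python
import csv
from datetime import date

# Append one row per completed task: who did it, with or without AI,
# and how many minutes it took. File name and labels are made up.
with open("ai_adoption_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([date.today(), "alice", "draft product brief", "with_ai", 25])
    writer.writerow([date.today(), "bob", "draft product brief", "without_ai", 70])
```

A before-and-after comparison is then just a matter of grouping by task and comparing averages; nothing fancier is needed to be directionally right.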

What matters is that teams capture these signals, even informally. This is crucial for facilitating and accelerating the kind of adoption that is actually helpful: you learn where to pivot, where to stop, and where to go deeper.

Some practical initiatives

Given the enablers above, here are some initiatives that I think are worth considering operationally; what makes sense will depend heavily on the specifics of the company.

If starting from scratch, start with 2–3 pilot use cases. Pick problems that are real and visible. If the first experiments are on things that matter to people, it’s easier to learn from them and build on them. Be clear about what the goal of the pilot is.

A small centralised AI adoption team. Their job would be to encourage and support experimentation, share what’s working, and help teams avoid reinventing the wheel. Not large, and not there to control things. The profile of these people can vary depending on the use cases: ML engineers, AI engineers, software engineers, or marketers and product managers with a knack for agents.

AI champions in different teams and shared learning. Someone in marketing, someone in finance, someone in operations: people who are curious and willing to experiment, and who can connect the central team with their colleagues. This is an idea adopted from transformation programmes. Regular show-and-tells, a shared repository of use cases, or even a Slack channel can all encourage shared learning.

A data platform to enable AI and monitor outcomes. This is the infrastructure piece, and depending on the maturity of the company and the goals of the pilots, it can benefit from different degrees of sophistication. If your use case involves distilling information as a one-off (e.g. summaries, scrapers, code assistants), you should be able to get good results without much infrastructure. If you are building AI applications that learn from past data, you would most likely benefit from a system that can gather this data and structure it in a way that helps AI models and agents.
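As a rough sketch of that second case (the schema and data below are invented purely for illustration), even a small relational store that consolidates past outcomes gives models and agents something structured to query or learn from:

```python
import sqlite3

# Toy example: consolidate historical support tickets into one structured
# table that an AI application or agent can query. Schema is illustrative.
conn = sqlite3.connect("ai_platform.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS ticket_history (
        ticket_id INTEGER PRIMARY KEY,
        opened_at TEXT,
        category TEXT,
        resolution TEXT,
        minutes_to_resolve INTEGER
    )
""")
conn.execute(
    "INSERT OR IGNORE INTO ticket_history VALUES (?, ?, ?, ?, ?)",
    (1, "2024-05-01", "billing", "refund issued", 42),
)
conn.commit()

# Grounding an agent in structured history, e.g. answering
# "how long do billing tickets usually take to resolve?"
avg_minutes = conn.execute(
    "SELECT AVG(minutes_to_resolve) FROM ticket_history WHERE category = ?",
    ("billing",),
).fetchone()[0]
print(avg_minutes)
```

The point is not the specific technology; it is that once past data is gathered and structured, AI applications can be built and evaluated against it.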

Two things that are easy to underestimate

Governance and trust. Data privacy, and simple responsible-use policies around when AI outputs need human review. Companies that skip this tend to hit problems when they try to scale. Getting some guardrails in place early can save a lot of pain later; these can be as simple as one paragraph in a Notion page to begin with, plus periodic check-ins!

Change management. In my experience the biggest blocker is usually people, not technology: I have seen fear of being replaced, a rush to “do AI” without pausing to think about how it can add value, and discomfort with the unavoidable process changes that come when AI gets deployed. The solution is always about achieving good alignment and getting buy-in. Framing AI initiatives as experiments, and getting people to offer input towards their success, can greatly help here.

Final thought

Embedding AI in an organisation is less about the technology and more about creating the conditions for people to experiment and learn.

Get leadership aligned on where the value is, give teams room to explore, and find ways to measure whether it’s working. The rest is iteration!

If you have any thoughts on this or have a different take on it, please do share!

And if you think I can help you in any way, drop me a message.

Best,

Andrea