Over my career so far, I have seen many AI and Machine Learning initiatives: some succeeded amazingly, and some failed silently.
Having worked both as a leader and as an individual contributor across a number of companies, I thought it would be interesting to try and distil what I see as the key enablers for leveraging AI, along with some practical steps that can follow from them.
For context: I have led AI efforts in 4 different companies across sectors and company sizes (from 1,300 people down to a 100-person series A). I have also built applied AI models myself in 6 more companies, again across sectors and company sizes (from Deliveroo around its IPO, to enterprises like Shell and Unilever).
The three main enablers I see are:
1. Being clear on how AI can add value
2. Being ready for experimentation
3. Finding ways to measure outcomes
How AI adds value
The way I think about it, AI can help in three ways:
1. Reduce costs (e.g. automate repetitive tasks) - easiest to work out, but added value is capped
2. Improve outcomes (e.g. better predictions, fewer errors) - more difficult to work out, added value has a looser cap
3. Enable things you couldn’t do before (e.g. creating a new offering) - can be purely experimental, but has the most potential
Being clear about which of these you are trying to achieve helps cut through the noise of “doing AI”, and makes it easier to identify and prioritise opportunities for AI applications.
I must have seen tens of projects by now where teams tried to use AI without pausing to notice that the room for added value was quite small, resulting in weeks, if not months, of wasted effort. I have been guilty of this a couple of times early in my career.
A few years back, I had recently joined as Head of DS and ML at a series C company. One of the key achievements presented to me was a predictive churn model, which had been requested directly by the previous CFO and which was in production, achieving 92% accuracy. I thought this was amazing! Upon looking into it, while the model was technically sound, I realised that not a single person in the company was using its outputs (and that the team was nevertheless maintaining it). What had happened was that the model was solving a problem nobody cared about - predicting churn rate at a daily resolution was not really actionable.
This is perhaps where I have seen most applied AI initiatives fail silently, or, even worse, become a source of empty success celebrations while not actually moving the needle.
Interestingly, this has not changed since the days of classic Machine Learning and Data Science, probably because these three are simply the core levers available to any org that wants to improve (whether using AI or not).
Experiments are key
My experience has been that for many problems it can be difficult to figure out exactly how AI fits into the picture without trying.
I worked on a project where we wanted to help prevent theft using automated video surveillance. After building the AI models, we saw that some did really well in certain lighting or scenery conditions, but failed spectacularly in others. Other approaches were able to always detect when theft happened, but at the cost of quite a few false detections. What made most sense was to take this as an input to the product decision and find a balance depending on validated customer appetite.
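To ground that kind of product discussion, what helped was looking at precision and recall per scene condition rather than a single global number. Below is a minimal sketch of that sort of comparison in Python; the file name, column names and conditions are hypothetical, not taken from the actual project.

```python
import pandas as pd

# Hypothetical evaluation log: one row per clip, with the ground truth label,
# the scene condition it was captured in, and each candidate model's prediction.
df = pd.read_csv("detections.csv")  # columns: condition, label, model_a, model_b

def precision_recall(labels, preds):
    """Precision and recall for binary theft / no-theft predictions."""
    tp = int(((labels == 1) & (preds == 1)).sum())
    fp = int(((labels == 0) & (preds == 1)).sum())
    fn = int(((labels == 1) & (preds == 0)).sum())
    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    recall = tp / (tp + fn) if (tp + fn) else float("nan")
    return precision, recall

for model in ["model_a", "model_b"]:
    print(model)
    # Breaking results down by condition (e.g. daylight, low light, crowded)
    # is what surfaces the "great here, terrible there" behaviour.
    for condition, group in df.groupby("condition"):
        p, r = precision_recall(group["label"], group[model])
        print(f"  {condition:<12} precision={p:.2f} recall={r:.2f}")
```

Numbers like these are much easier to put in front of a product manager than a single accuracy figure.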
One interesting consideration in the age of LLMs and agents is that adoption is already happening, whether companies plan for it or not. And each function is using AI differently: a marketing team might use it to generate content, a finance team to spot anomalies, an engineering team to write code and tests.
The real question is whether the company helps channel that energy, acknowledging that you need room for experimentation.
The approach can be both top-down (strategic priorities from leadership) and bottom-up (teams finding their own use cases). In my experience the best results come when you do both.
Finding ways to measure outcomes
You need some way of knowing whether AI is actually helping. It doesn’t have to be perfect - especially early on, many benefits are qualitative. Time saved, fewer errors, faster iteration. A quick survey, a before-and-after comparison, a simple log of time spent. Over time you can build more rigour, but in the early days, directionally right is good enough.
What matters is that teams capture these signals, even informally. This is crucial to facilitate and accelerate adoption that is actually helpful. You get to understand where to pivot, where to stop, where to go deeper.
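To illustrate how lightweight this can be, here is roughly the sort of “simple log of time spent” I have in mind, sketched in Python; the file and field names are made up, and a shared spreadsheet works just as well.

```python
import csv
from datetime import date

LOG_PATH = "ai_usage_log.csv"  # hypothetical shared log file

def log_task(team: str, task: str, used_ai: bool, minutes_spent: int, notes: str = "") -> None:
    """Append one row per completed task - enough for a rough before/after comparison later."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), team, task, used_ai, minutes_spent, notes]
        )

# The same task done with and without AI assistance gives a directional signal.
log_task("marketing", "draft product newsletter", used_ai=True, minutes_spent=35)
log_task("marketing", "draft product newsletter", used_ai=False, minutes_spent=90)
```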
Some practical initiatives
Given the enablers above, here are some initiatives that I think are worth considering operationally - what makes sense will depend heavily on the specifics of the company.
If starting from scratch, start with 2–3 pilot use cases. Pick problems that are real and visible. If the first experiments are on things that matter to people, it’s easier to learn from them and build on them. Be clear about what the goal of the pilot is.
A small centralised AI adoption team. Their job would be to encourage and support experimentation, share what’s working, and help teams avoid reinventing the wheel. Not large, and not there to control things. The profiles of these people can vary depending on the use cases — ML engineers, AI engineers, Software Engineers, Marketers or Product Managers with a knack for agents.
AI champions in different teams and shared learning. Someone in marketing, someone in finance, someone in operations — people who are curious and willing to experiment, and who can connect the central team with their colleagues. This is an idea adopted from transformation programmes. Regular show-and-tells, a shared repository of use cases, or even a Slack channel can also encourage learning.
A data platform to enable AI and monitor outcomes. This is the infrastructure piece, and depending on the maturity of the company and the goal of the pilots it can benefit from different degrees of sophistication. If your use case involves distilling information as a one-off (e.g. summaries, scrapers, code assistants), you should be able to get good results without much infrastructure. If you are building AI applications that learn from past data, you will most likely benefit from a system that can pull this data together and structure it to help AI models and agents.
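As a rough illustration of the second case, the sketch below pulls history from two hypothetical exports into one structured table that a model or agent can be pointed at. The sources, columns and table name are all assumptions; a real platform would add things like ownership, freshness checks and access control on top.

```python
import sqlite3
import pandas as pd

# Hypothetical exports from two systems that hold relevant history.
orders = pd.read_csv("orders_export.csv")      # order_id, customer_id, ordered_at, value
tickets = pd.read_csv("support_tickets.csv")   # ticket_id, customer_id, opened_at, topic

# One structured row per customer that downstream models or agents can query.
per_customer = (
    orders.groupby("customer_id")
    .agg(n_orders=("order_id", "count"), total_value=("value", "sum"))
    .join(
        tickets.groupby("customer_id").agg(n_tickets=("ticket_id", "count")),
        how="left",
    )
    .fillna({"n_tickets": 0})
    .reset_index()
)

with sqlite3.connect("ai_platform.db") as conn:
    per_customer.to_sql("customer_features", conn, if_exists="replace", index=False)
```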
Final thoughts
Perhaps not surprisingly, I have found that embedding AI in an organisation is not only about the technology but also about creating the conditions for people to experiment and learn: having clear objectives and being deliberate about the learning helps so much!
If you have any thoughts on this or have a different take on it, please do share. And if you think I can help you in any way, drop me a message.
Best,
Andrea