How to build real enterprise AI capability, according to Occubuy, AMP, EBOS Group, Bain data leaders
In this Data & AI Edge panel, leaders from AMP, EBOS Group, Occubuy, and Bain examine how enterprise AI starts creating durable value through better experimentation, stronger leadership structures, and use cases that are tied tightly to business outcomes.
Enterprise AI capability is not built through ambition alone.
It is built through disciplined experimentation, clear operating structure, and use cases that prove business value early.
Richard Fleming, Partner at Bain & Company, joined Kavitha Mistry, Chief Technology Officer at AMP, Artak Amirbekyan, Chief Data Officer at EBOS Group, and Dr Amy Shi-Nash, Professor at Monash University, Chief Executive Officer and Co-founder at Occubuy, and former Chief Data and Analytics Officer at Tabcorp, on the Data & AI Edge stage to examine what actually helps organisations move AI from strategy into execution.
Key takeaways:
- Early experimentation matters when it helps organisations test real opportunities, expose risks, and shape a more grounded AI strategy.
- Enterprise AI scales more effectively when leadership backs it, ownership is clear, and capability building is structured across the business.
- The best first use cases build credibility because they are tied to real business outcomes rather than novelty or generic efficiency claims.
Experimentation matters when it sharpens the strategy
AI programs get stronger when experimentation is used to test practical value, surface risk, and narrow the focus.
Without that phase, strategy stays too abstract.
Amy makes that point through her focus on early experimentation.
She argues that generative AI becomes more useful once teams begin testing where it can unlock value and where it introduces risk.
In her case, that included working with unstructured data and tools such as knowledge graphs to surface value from information that had been hard to use.
That work helped shape a more realistic strategy with clearer guardrails.
Structure and sponsorship determine whether AI adoption scales
Enterprise AI does not scale through isolated pilots or technical enthusiasm alone.
It needs executive backing, clear ownership, and a model that supports different forms of adoption across the business.
Kavitha outlines that through AMP’s approach.
She points to direct sponsorship from the CEO, a centralised AI hub, responsible AI frameworks developed with academic partners, and a flexible technology stack designed to avoid vendor lock-in.
She also makes clear that adoption has to be tailored.
Productivity tools, customer experience applications, and domain-specific use cases do not all need the same operating model, but they do need leadership support and structured capability building.
The right first use case creates momentum
AI programs gain traction when the first use case is tied to a real business problem and delivers a result the organisation can recognise.
That is what builds trust and creates room for broader adoption.
Artak brings that discipline into focus.
He argues that use cases need to connect directly to business value, whether that is fraud detection, better explainability, or operational efficiency.
His examples also show that AI value does not always show up in obvious places.
Fraud detection delivered stronger explainability, while ventilation optimisation created sustainability gains.
The point is not to prove that AI works in theory, but to choose a starting point that proves value where it matters.
Enterprise AI capability improves when organisations experiment early, build the right structure around adoption, and choose the first problems carefully.
That is what turns AI from activity into execution.