Why enterprise AI grows from the grassroots when trust and agency are built in
In this Data & AI Edge session, Danny Liu, Professor in Educational Technologies at the University of Sydney, explains how AI adoption scales when people are given the trust, control, and safe conditions to experiment for themselves.
AI adoption often breaks down before the technology does.
Fear of replacement, weak trust, and rigid rollout models push people into avoidance, quiet resistance, or shadow use.
At the 5th Data & AI Edge, Danny Liu used the University of Sydney’s experience to show how AI spreads further when people feel safe enough to explore it, control how it behaves, and use it to solve local problems that matter in their own work.
Key takeaways:
- AI adoption succeeds when trust, psychological safety, and agency are treated as core design conditions, not soft change management issues.
- People move beyond substitution and start creating new value when they can control what AI does, how it behaves, and where it is deployed.
- Mandates suppress experimentation. Low-friction tools, risk-tolerant governance, and shared examples spread AI further across the organisation.
Trust and psychological safety shape adoption before productivity does
AI enters organisations as a perceived threat long before it becomes a useful tool.
People worry about hallucinations, compliance, reputation, and whether using AI will make their own role less secure.
Those concerns do not disappear because a new platform is launched. They shape whether people engage at all.
Danny’s account of the University of Sydney makes that clear.
In a university environment, formal rules alone were never going to drive adoption.
Staff were already wary of how AI might affect teaching, student outcomes, and their own role.
The breakthrough came when the conversation shifted from AI as something happening to them towards AI as something they could shape and use with confidence.
That made trust and psychological safety an adoption issue before it became a tooling issue.
Control turns AI from a threat into a creative tool
People engage more seriously with AI when they can decide what it can access, what it should do, and how it should behave in their own context.
That control changes AI from a black box into something people can test, refine, and trust.
Danny describes that shift through the University of Sydney’s internal platform, where educators could build their own agents and mini apps, provide course resources, see how students were using them, and adjust the tools over time.
That made the technology visible and manageable. It also broadened what people built.
Once staff had a safe way to experiment, the use cases moved beyond simple tutoring into simulations, games, discipline-specific learning tools, and other applications shaped by real local needs.
Control created room for creativity.
Enablement spreads AI further than mandates
AI does not scale cleanly through top-down directives in complex organisations.
People comply on the surface, but the quality of adoption stays shallow when the use case is imposed and the boundaries are too narrow.
Danny is explicit about that lesson.
A centrally driven project that required large classes to use a specific type of agent failed because educators were told what they had to build and how they had to use it.
The stronger model came from enablement instead: simple tools, room to experiment, risk-tolerant governance, and enough support for people to discover useful patterns for themselves.
That approach helped more institutions adopt the platform, share good practice, and build momentum from the ground up.
It also created a more durable cultural shift, where people could see AI as something they were learning to work with rather than something being imposed on them.
Discovery mode is what organisations should aim for
The practical goal is to create an environment where curiosity is easier than fear.
Danny uses the idea of discovery mode to describe that state, where people feel secure enough to test ideas, take small risks, and learn through use rather than staying defensive or disengaged.
That is the broader lesson from his session.
Grassroots AI becomes scalable when organisations make experimentation safe, give people meaningful control, and keep governance strong without making it heavy.
That is what turns AI from isolated activity into wider capability.