Assessment is becoming the real AI challenge for universities, says the University of Sydney’s Interim CIO
In this ADAPT Insider episode, Kerry Holling explains how universities are responding to student-led AI adoption by building practical guardrails, rethinking assessment, and protecting trust across learning and research.
AI pressure is hitting universities differently from most organisations.
It is being driven from the ground up by students, academics, and researchers already testing where AI helps and where it starts to distort learning.
Kerry Holling, Interim CIO at the University of Sydney, explains how that pressure is changing governance, teaching, and trust across the institution.
Listen to the full episode on Apple Podcasts and Spotify.
Key takeaways:
- Student behaviour is forcing universities to move faster on AI governance, with guardrails that protect privacy, data sovereignty, and research integrity without slowing useful experimentation.
- Assessment design is becoming a bigger challenge than tool access, as universities work out how to test real understanding in an AI-enabled learning environment.
- Trust in university AI depends on balance, with enough freedom to support research and learning, and enough control to protect rigour, accountability, and public confidence.
Universities need guardrails that people will actually use
AI governance in universities has to work in the real world. If the controls are too rigid, staff and students will route around them.
If they are too loose, privacy, data sovereignty, and research integrity are exposed.
That is why the University of Sydney has focused on practical guardrails developed jointly across IT and Legal, with self-assessment tools that help staff judge use cases without turning governance into a bottleneck.
Kerry’s point is that balance matters more in a university environment because academic work depends on openness and experimentation, while the institution still has to protect sensitive data, intellectual property, and research quality.
The goal is to create enough structure to support safe use without shutting down the value AI can bring to research, teaching, and operations.
The bigger teaching challenge is no longer access to AI: it is assessment
The hard question for universities is no longer whether students will use AI.
The real issue is whether assessment still measures understanding, judgement, and learning in an environment where AI can generate convincing outputs quickly.
That is where Kerry sees the pressure building.
He argues that AI can improve learning when it helps students deepen their understanding, but weak assessment design will invite shortcuts instead.
The stronger institutional response is to rethink how knowledge is tested so students still have to demonstrate real comprehension.
He also points to examples where AI is improving access to teaching rather than undermining it.
One University of Sydney academic built Cogniti to replicate parts of one-to-one support at scale, and Kerry says it is now used by more than 5,000 academics to help develop curriculum material and provide more personalised tuition to students.
For him, that is what useful AI in education looks like: expanding learning support in places where human access is naturally limited.
Trust grows when AI is used to augment people, not displace judgement
Universities will get more value from AI when they treat it as a tool for augmentation, not as a substitute for human thinking, academic rigour, or institutional accountability.
Kerry is optimistic about AI’s potential, but he is careful about how far that optimism should go.
He supports AI for personal productivity and sees genuine value in tools that accelerate research, improve learning outcomes, and reduce friction in university work.
At the same time, he is wary of over-dependence, concerned about the concentration of power in large technology companies, and clear that institutions should be selective about how they deploy AI.
He describes it as something that should be used with respect, not submission, and that framing matters.
The trust challenge in higher education is not only about policy.
It is also about making sure AI strengthens human capability, protects academic judgement, and earns confidence as adoption becomes more embedded.