How the University of Adelaide’s Chief Security Officer applies defence-grade compliance to AI and research security
In this Security Edge interview, Bruce Northcote, Chief Compliance and Chief Security Officer at the University of Adelaide, discussed the realities of securing defence research and managing AI risk in a complex academic environment.
Bruce leads the university’s defence research security program, which must meet stringent Defence Industry Security Program (DISP) requirements.
With over two decades at the institution, he balances academic independence with the need for rigorous compliance across a complex, decentralised environment.
He noted that while frameworks and governance are critical, achieving uniform compliance in a large university is nearly impossible, given the autonomy of researchers and faculties.
Within defence-linked programs, however, strict controls are non-negotiable due to the sensitivity of research data and the national implications of any breach.
Discussing AI, Bruce highlighted the role of the university’s Australian Institute of Machine Learning, one of the world’s leading AI research centres.
Yet, within business operations, Adelaide relies primarily on trusted vendor tools with embedded AI capabilities, applying them in areas such as research security to detect data exfiltration and identify anomalies.
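The article does not name the university's specific tooling, but the general pattern behind this kind of exfiltration detection is unsupervised anomaly detection over activity logs. The sketch below is purely illustrative, using scikit-learn's IsolationForest; the feature names (bytes out, file count, off-hours activity) and all numbers are hypothetical.

```python
# Illustrative sketch only: unsupervised anomaly detection over
# hypothetical per-user data-transfer features. Not the university's
# actual tooling, which the article does not describe.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-user daily features: MB sent out, files touched,
# fraction of activity outside business hours.
normal = rng.normal(loc=[50, 20, 0.1], scale=[10, 5, 0.05], size=(500, 3))
suspicious = rng.normal(loc=[500, 200, 0.8], scale=[50, 20, 0.1], size=(5, 3))
activity = np.vstack([normal, suspicious])

# contamination is the expected share of anomalous records.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(activity)  # -1 = anomaly, 1 = normal

# Flag anomalies for analyst review rather than taking automated action.
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} activity records flagged for human review: {flagged}")
```

Note the last step: the model only flags records; a person decides what they mean, which is exactly the oversight point Bruce makes next.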
He warned against overreliance on AI, arguing that in mission-critical contexts like defence research the guiding principle must remain “don’t trust and still verify”.
Because AI operates on probability, human oversight and validation are essential to ensure accuracy and accountability.
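One common way to operationalise that principle is a human-in-the-loop gate: low-confidence model outputs always escalate to a person, and even high-confidence outputs are sampled for audit. This is a minimal sketch of that pattern, not anything described in the interview; the classifier, thresholds, and queue names are all assumptions for illustration.

```python
# Minimal "don't trust and still verify" routing sketch, assuming a
# hypothetical classifier that returns a label plus a calibrated
# probability. Thresholds and queue names are illustrative only.
import random
from dataclasses import dataclass

@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float  # model's probability for its own label

REVIEW_THRESHOLD = 0.90  # below this, always escalate to a human
AUDIT_RATE = 0.10        # fraction of confident results spot-checked anyway

def route(pred: Prediction) -> str:
    """Return the queue this prediction should be sent to."""
    if pred.confidence < REVIEW_THRESHOLD:
        return "human_review"       # model is unsure: a person decides
    if random.random() < AUDIT_RATE:
        return "human_audit"        # verify even trusted outputs
    return "accept_with_logging"    # accepted, but traceable for later review

preds = [
    Prediction("doc-001", "sensitive", 0.97),
    Prediction("doc-002", "public", 0.62),
]
for p in preds:
    print(p.item_id, "->", route(p))
```

The audit sample is the key design choice: it keeps humans verifying outputs the model is confident about, so accuracy drift does not go unnoticed.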
Bruce also discussed data classification and sovereignty as central challenges.
He cautioned that AI systems are only as dependable as the quality and classification of their training data, with mislabelled or biased datasets leading to distorted insights.
He noted that geopolitical constraints, such as restricted datasets in Chinese models, demonstrate how classification bias can influence results.
For Australia, he emphasised the need to advance AI sovereignty and ensure sensitive research data stays within national borders, reducing reliance on overseas vendors and strengthening trust in domestic AI development.
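To make the mislabelled-data point concrete, here is a toy experiment (not from the interview): flip a growing fraction of training labels on a synthetic classification task and watch held-out accuracy degrade. The dataset, model, and noise levels are all illustrative assumptions.

```python
# Toy illustration of how mislabelled training data distorts a model:
# inject label noise into the training set and measure test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise  # mislabel this fraction
    y_noisy[flip] = 1 - y_noisy[flip]        # binary labels: 0 <-> 1
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label noise {noise:.0%}: test accuracy {acc:.3f}")
```

Even this simple setup shows the trend Bruce describes: the model's apparent insight erodes in step with the quality of its labels.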
Key takeaways
- Defence-grade compliance is non-negotiable: The University of Adelaide enforces strict frameworks for defence research, even as broader university-wide compliance remains complex.
- AI requires human oversight: AI is used selectively through trusted vendor products to enhance research security, but human verification remains critical given AI’s probabilistic limits.
- Data sovereignty shapes future trust: Ensuring data accuracy, proper classification, and local control of AI infrastructure is vital to protecting national research integrity.