In this Security Edge interview, Bruce Northcote, Chief Compliance and Chief Security Officer at the University of Adelaide, discussed the realities of securing defence research and managing AI risk in a complex academic environment.

Bruce leads the university’s defence research security program, which must meet stringent Defence Industry Security Program (DISP) requirements.

With over two decades at the institution, he balances academic independence with the need for rigorous compliance across a complex, decentralised environment.

He noted that while frameworks and governance are critical, achieving uniform compliance in a large university is nearly impossible, given the autonomy of researchers and faculties.

Within defence-linked programs, however, strict controls are non-negotiable due to the sensitivity of research data and the national implications of any breach.

Discussing AI, Bruce highlighted the role of the university’s Australian Institute of Machine Learning, one of the world’s leading AI research centres.

Yet, within business operations, Adelaide relies primarily on trusted vendor tools with embedded AI capabilities, applying them in areas such as research security to detect data exfiltration and identify anomalies.

He warned against overreliance on AI, saying “don’t trust and still verify” must remain the guiding principle in mission-critical contexts like defence research.

Because AI operates on probability, human oversight and validation are essential to ensure accuracy and accountability.

Bruce also discussed data classification and sovereignty as central challenges.

He cautioned that AI systems are only as dependable as the quality and classification of their training data, with mislabelled or biased datasets leading to distorted insights.

He noted that geopolitical constraints, such as restricted datasets in Chinese models, demonstrate how classification bias can influence results.

For Australia, he emphasised the need to advance AI sovereignty and ensure sensitive research data stays within national borders, reducing reliance on overseas vendors and strengthening trust in domestic AI development.


Key takeaways

  • Defence-grade compliance is non-negotiable: The University of Adelaide enforces strict frameworks for defence research, even as broader university-wide compliance remains complex.
  • AI requires human oversight: AI is used selectively through trusted vendor products to enhance research security, but human verification remains critical given AI’s probabilistic limits.
  • Data sovereignty shapes future trust: Ensuring data accuracy, proper classification, and local control of AI infrastructure is vital to protecting national research integrity.
Contributors
Bruce Northcote, Chief Compliance and Chief Security Officer at the University of Adelaide
Professor Bruce Northcote’s role within the Division of Research & Innovation at the University of Adelaide is to ensure the University complies…

Byron Connolly, Head of Programs & Value Engagement at ADAPT
Byron Connolly is a highly experienced technology and business journalist, editor, corporate writer, and event producer, and ADAPT’s Head of Programs and Value Engagement.

Prior to joining ADAPT, he was the editor-in-chief at CIO Australia and associate editor at CSO Australia. He also created and led the well-known CIO50 awards program in Australia and The CIO Show podcast.

As the Head of Programs, Byron creates valuable insights for ADAPT’s community of senior technology and business professionals, helping them reach their organisational and professional goals. With over 25 years of experience, he has a passion for uncovering stories about the careers and personal philosophies of Australia’s top technology and digital executives.

When he is not working, Byron enjoys hot yoga, swimming, running, and spending time with his family.
