In this Security Edge interview, Bruce Northcote, Senior Compliance Executive at the University of Adelaide, discussed the realities of securing defence research and managing AI risk in a complex academic environment.

Bruce leads the university’s defence research security program, which must meet stringent Defence Industry Security Program (DISP) requirements.

With over two decades at the institution, he balances academic independence with the need for rigorous compliance across a complex, decentralised environment.

He noted that while frameworks and governance are critical, achieving uniform compliance in a large university is nearly impossible, given the autonomy of researchers and faculties.

Within defence-linked programs, however, strict controls are non-negotiable due to the sensitivity of research data and the national implications of any breach.

When discussing the expanding role of AI agents across enterprises, Bruce often draws on lessons from much earlier in his career.

He recalls how, in January 1990, a single rogue line of code crippled AT&T’s long-distance network in the United States for nine hours.

Years later, while working at Bell Communications Research (Bellcore), he was part of a team responsible for dissecting network protocols to understand exactly why the failure occurred and how to prevent similar systemic outages.

That work required mathematically validating protocol behaviour because, as Bruce notes, trust in vendor-supplied systems was never enough.

His team regularly conducted intense “shake and bake” stress tests on switches, often breaking equipment that vendors had claimed was production-ready.

He draws a direct line from those experiences to today’s probabilistic or agentic AI models: systems that, like those old networks, cannot guarantee the exact outcome you want.

For that reason, he argues, intelligent agents should not be entrusted with mission-critical functions where verification is essential.

Discussing AI now, Bruce highlighted the role of the university’s Australian Institute of Machine Learning, one of the world’s leading AI research centres.

Yet, within business operations, Adelaide relies primarily on trusted vendor tools with embedded AI capabilities, applying them in areas such as research security to detect data exfiltration and identify anomalies.
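The vendor tooling itself is a black box here, but the underlying idea is familiar. As a purely illustrative sketch — the function, data, and threshold below are hypothetical, not the University of Adelaide’s actual controls — a baseline-and-outlier check of this kind is one way AI-assisted tools can surface possible exfiltration for human review:

```python
# Illustrative only: a minimal baseline-and-outlier check of the kind
# embedded in vendor security tools. Names, data, and the threshold are
# hypothetical, not the University of Adelaide's actual controls.
from statistics import mean, stdev

def flag_exfiltration_candidates(daily_mb_by_user: dict[str, list[float]],
                                 z_threshold: float = 3.0) -> list[str]:
    """Flag users whose latest outbound transfer volume is an outlier
    against their own recent history (simple z-score test)."""
    flagged = []
    for user, history in daily_mb_by_user.items():
        baseline, latest = history[:-1], history[-1]
        if len(baseline) < 2:
            continue  # too little history to form a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latest - mu) / sigma > z_threshold:
            flagged.append(user)  # escalate to an analyst, never auto-block
    return flagged

# One user suddenly moves ~50x their usual daily volume and is flagged.
print(flag_exfiltration_candidates({
    "researcher_a": [12.0, 15.0, 11.0, 14.0, 13.0, 640.0],
    "researcher_b": [20.0, 22.0, 19.0, 21.0, 20.0, 23.0],
}))
```

Note the design choice: the check only flags candidates for an analyst rather than acting automatically, which anticipates the verification principle Bruce describes next.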

He warned against overreliance on AI, saying that “don’t trust and still verify” must remain the guiding principle in mission-critical contexts such as defence research.

Because AI operates on probability, human oversight and validation are essential to ensure accuracy and accountability.
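In code terms, that oversight principle might look like the following hedged sketch: a hypothetical routing gate in which model output never directly actions sensitive material, and even a high confidence score earns only a logged, provisional clearance. The dataclass, labels, and threshold are illustrative assumptions, not a system anyone described.

```python
# A sketch of "don't trust and still verify": probabilistic output never
# acts directly on defence-linked material; it only queues recommendations.
# The dataclass, labels, and threshold are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ModelVerdict:
    document_id: str
    label: str         # e.g. "sensitive" or "unrestricted"
    confidence: float  # a model-reported probability, not ground truth

def route(verdict: ModelVerdict, auto_threshold: float = 0.99) -> str:
    """Route every sensitive verdict, and any low-confidence one, to a
    human; a confidence score is a probability, not a guarantee."""
    if verdict.label == "sensitive" or verdict.confidence < auto_threshold:
        return f"{verdict.document_id}: hold for human verification"
    return f"{verdict.document_id}: provisionally cleared, logged for audit"

print(route(ModelVerdict("doc-114", "sensitive", 0.97)))      # -> human review
print(route(ModelVerdict("doc-115", "unrestricted", 0.995)))  # -> cleared, audited
```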

Bruce also discussed data classification and sovereignty as central challenges.

He cautioned that AI systems are only as dependable as the quality and classification of their training data, with mislabelled or biased datasets leading to distorted insights.

He noted that geopolitical constraints, such as restricted datasets in Chinese models, demonstrate how classification bias can influence results.

For Australia, he emphasised the need to advance AI sovereignty and ensure sensitive research data stays within national borders, reducing reliance on overseas vendors and strengthening trust in domestic AI development.

Key takeaways

  • Defence-grade compliance is non-negotiable: The University of Adelaide enforces strict frameworks for defence research, even as broader university-wide compliance remains complex.
  • AI requires human oversight: AI is used selectively through trusted vendor products to enhance research security, but human verification remains critical given AI’s probabilistic limits.
  • Data sovereignty shapes future trust: Ensuring data accuracy, proper classification, and local control of AI infrastructure is vital to protecting national research integrity.
Contributors
Bruce Northcote, Senior Compliance Executive at the University of Adelaide

Professor Bruce Northcote’s role within the Division of Research & Innovation at the University of Adelaide is to ensure the University complies...
Byron Connolly, Head of Programs & Value Engagement at ADAPT

Byron Connolly is a highly experienced technology and business journalist, editor, corporate writer, and event producer, and ADAPT’s Head of Programs and Value Engagement.

Prior to joining ADAPT, he was the editor-in-chief at CIO Australia and associate editor at CSO Australia. He also created and led the well-known CIO50 awards program in Australia and The CIO Show podcast.

As the Head of Programs, Byron creates valuable insights for ADAPT’s community of senior technology and business professionals, helping them reach their organisational and professional goals. With over 25 years of experience, he has a passion for uncovering stories about the careers and personal philosophies of Australia’s top technology and digital executives.

When he is not working, Byron enjoys hot yoga, swimming, running, and spending time with his family.
