How the Australian government uses AI to solve complex public problems safely and at scale
Public Sector Transformation Executive Daniela Polit explains how government is using AI to improve services while protecting trust, sovereignty, and human accountability.
What does safe scale look like when the cost of getting AI wrong is measured in public trust?
In this ADAPT Insider podcast episode, Daniela Polit outlines a clear test for government AI: it should help solve complex public problems, reduce friction for citizens, and improve services at scale, while operating within guardrails that protect sovereignty, accountability, and trust.
Listen to the full episode on Apple Podcasts and Spotify.
Key takeaways:
- AI should only be used when it clearly improves a public outcome, whether that means faster service, less friction, greater inclusivity, or more efficient processing.
- Trust depends on keeping sovereignty, transparency, and human accountability intact, with AI used inside closed environments and final decisions always staying with people.
- Safer AI adoption comes from matching governance, training, and oversight to the level of public risk, rather than applying the same approach everywhere.
Public value has to come before the technology
Strong AI strategies begin with the problem being solved.
In government, that means asking whether a tool can genuinely improve a service, shorten wait times, reduce bureaucracy, or make support easier to access for citizens.
That is the lens Daniela applies throughout the conversation.
She describes AI as a way to solve complex public sector problems, especially where service delivery involves scale, complexity, and large volumes of information.
The value, in her view, comes from helping people deal with government faster and with less friction, whether that means reducing unnecessary touchpoints, improving transparency, or tailoring services more effectively across very different citizen needs.
If AI can clearly improve the outcome, it has a case. If the likely value is marginal and the risk is higher, it should not be forced in.
Trust holds when sovereignty and accountability are protected
Public sector AI needs trust built into where models run, how data is handled, and who remains responsible for the outcome.
Daniela makes that standard explicit.
She says government models are hosted in closed internal environments, so the rules and authorisations that already apply to public data carry through into AI use.
She is equally clear that accountability stays with a person.
A tool can support a decision, accelerate a process, or structure information more effectively, but it cannot replace the accountable decision maker.
That combination of sovereignty, transparency, and human oversight is what allows AI to be used in sensitive environments while preserving public confidence.
Safer scaling depends on risk-based governance and training
AI becomes easier to scale when governance gives teams a clear way to assess value, manage risk, and move suitable use cases forward with confidence.
That is how Daniela describes the public sector approach.
She points to frameworks and assurance checks that run from ideation through implementation, testing, and evaluation, with policies evolving as use cases become more complex.
She also makes clear that training should reflect the level of public impact.
More structured education and tighter oversight are used for public-facing or higher-risk applications, while lower-risk internal tools can be handled more flexibly.
Even in a large and federated system where collaboration across agencies is still often informal, that discipline creates a stronger filter around value, accountability, and safe deployment.