What does safe scale look like when the cost of getting AI wrong is measured in public trust?

In this ADAPT Insider podcast episode, Daniela Polit, Public Sector Transformation Executive, outlines a clear test for government AI. It should help solve complex public problems, reduce friction for citizens, and improve services at scale, while operating within guardrails that protect sovereignty, accountability, and trust.

Listen to the full episode on Apple Podcasts and Spotify.

Key takeaways:

  • AI should only be used when it clearly improves a public outcome, whether that means faster service, less friction, better inclusivity, or more efficient processing.
  • Trust depends on keeping sovereignty, transparency, and human accountability intact, with AI used inside closed environments and final decisions always staying with people.
  • Safer AI adoption comes from matching governance, training, and oversight to the level of public risk, rather than applying the same approach everywhere.

Public value has to come before the technology

Strong AI strategies begin with the problem being solved.

In government, that means asking whether a tool can genuinely improve a service, shorten wait times, reduce bureaucracy, or make support easier to access for citizens.

That is the lens Daniela applies throughout the conversation.

She describes AI as a way to solve complex public sector problems, especially where service delivery involves scale, complexity, and large volumes of information.

The value, in her view, comes from helping people deal with government faster and with less friction, whether that means reducing unnecessary touchpoints, improving transparency, or tailoring services more effectively across very different citizen needs.

If AI can clearly improve the outcome, it has a case. If the likely value is marginal and the risk is higher, it should not be forced in.

Trust holds when sovereignty and accountability are protected

Public sector AI needs trust built into where models run, how data is handled, and who remains responsible for the outcome.

Daniela makes that standard explicit.

She says government models are hosted in closed internal environments, with the same rules and authorisations that already apply to public data carried through into AI use.

She is equally clear that accountability stays with a person.

A tool can support a decision, accelerate a process, or structure information more effectively, but it cannot replace the accountable decision maker.

That combination of sovereignty, transparency, and human oversight is what allows AI to be used in sensitive environments while preserving public confidence.

Safer scaling depends on risk-based governance and training

AI becomes easier to scale when governance gives teams a clear way to assess value, manage risk, and move suitable use cases forward with confidence.

That is how Daniela describes the public sector approach.

She points to frameworks and assurance checks that run from ideation through implementation, testing, and evaluation, with policies evolving as use cases become more complex.

She also makes clear that training should reflect the level of public impact.

More structured education and tighter oversight apply to public-facing or higher-risk applications, while lower-risk internal tools can be handled more flexibly.

Even in a large and federated system where collaboration across agencies is still often informal, that discipline creates a stronger filter around value, accountability, and safe deployment.

Contributors
Daniela Polit, Director, Strategic Programs at NSW Department of Customer Service
Daniela Polit is a government transformation and portfolio leader with deep consulting experience and a track record turning strategy into delivery. She has led the evolution of an ICT PMO into a scalable Value Management Office overseeing the full investment lifecycle for a multibillion-dollar portfolio—spanning prioritisation, governance, assurance, benefits realisation and performance. Currently a Director at the NSW Department of Customer Service (and previously seconded to the Premier’s Department), she drives major shared services and strategic programs, building high-performing teams (100+), strengthening portfolio systems and processes, and mobilising centres of excellence across project management, agile and change. Known for “cut-through” on complex initiatives, she brings a holistic, outcomes-first approach that aligns senior stakeholders, energises teams, and accelerates value from significant public investment.

Gabby Fredkin, Head of Analytics & Insights at ADAPT
As the Head of Analytics and Insights at ADAPT, Gabby Fredkin's primary role is managing analysis to produce ADAPT's actionable insights, identifying trends that support organisations in Australia.

With a passion for creating stories with data, Gabby is consistently rated as one of the top speakers at ADAPT's events. In roundtable discussions, he specialises in using statistics to initiate thought-provoking discussions, enabling ADAPT's customers to become more data-driven.

Using modern data science techniques, he provides ADAPT and its customers with confidence in the accuracy and validity of the information used for ADAPT’s research, advisory and events.

Working across artificial intelligence, machine learning, AI ethics, DevSecOps, end-user behaviour, and human-centred design, Gabby’s vast experience continues to grow, supported in part by a Master of Business Analytics from Deakin University.
