The last few weeks have been big ones for public service AI. To understand why, it's worth reflecting on just how disruptive AI is shaping up to be for regulators.
There is broad if uneasy acceptance that AI is going to create seismic shifts in how governments deliver their regulatory programs. 'Broad', because the power of AI and the value it offers to almost every facet of public administration makes adoption obvious and unavoidable. 'Uneasy', because AI brings a long list of novel risks that, untreated, can harm people and fracture trust.
In a post-Robodebt Australia, there is very little tolerance for regtech gone wrong. Of course, the issues with Robodebt went beyond technology into matters of culture, calculus and regulatory philosophy. Still, the notion of regulatory decisions being delegated to 'robo-deciders' unimpeded by empathy remains vivid and worrying to many Australians.
Research bears this out. Last year, in our benchmark survey on how Australians feel about the rise of AI, we asked respondents what worried them about an AI-enabled future. We anticipated some of the responses we received (job losses, more surveillance, and a Terminator-style downfall of humanity).
But we also saw something new - a distinct fear that smart machines taking on human roles would result in less compassionate institutions overall, creating a 'Compassion Gap'.
Robodebt - despite being a deterministic algorithm rather than true AI - came up again and again as the exemplar of this. The Compassion Gap reflects a strong community reservation about removing humans - and therefore human traits like empathy and kindness - from the workflows of government and society at large.
In the midst of this unease, AI continues to be woven into mainstream technology like Microsoft's Copilot and Salesforce's Einstein, and AI-driven analytics is being eyed as a holy grail for better regulation. And it should be.
After all, there are regulatory use cases well within current or near-future AI capabilities that only a few years ago would have felt like science fiction. Give a well-trained AI enough processing power, and the combination of masses of data, human-level smarts and lightning-fast software will make possible truly breathtaking regulatory models that, in the before-times, would have needed an army of humans to operate, or would simply have been unhinged from reality.
This is the upside of AI for government, and it is compelling.
But it would take only one or two 'tech gone wrong' events - founded in AI bias, hallucination, model drift, unexplainable decisions, training errors, poor supervision or simply a green flag applied incorrectly - to lose public trust.
Worse, such events could harm vulnerable people, damage regulated entities, or risk the loss of irreplaceable natural resources. This is the Robo-X scenario, where a new Robodebt-like issue emerges from the perfect storm of a new powerful technology, big vision and capability that is still too lean and too fresh to cover all the bases.
Getting the right safeguards in place is urgent business.
It is this context that explains why the Digital Transformation Agency's recent unveiling of its policy for responsible use of AI in government is such an important milestone. It gives a clear signal about the need to do AI well from the start, and sets a (necessarily) aggressive timeframe for agencies to get their AI ducks in a row.
As a rough timeline, the policy took effect on 1 September. Within 90 days, agencies must have named one or more accountable officials for AI, who will be on the hook for making sure their agency implements the policy effectively, and for tracking and notifying the DTA about high-risk use cases.
By March 2025, every government entity in the policy's scope (which is most of them) will need a public statement on its approach to AI adoption and use, including safeguards against public harm - and will need to keep that statement under rolling review. Agencies have also been strongly advised to train all staff in AI fundamentals, and to invest in specialist training for those involved in tasks like buying or building AI.
Other recommendations include setting up registers of AIs-in-use, integrating AI controls into other agency frameworks, and monitoring operationalised AIs to detect if things are going wrong - preferably before actual harm occurs.
At a glance, you may think that the effect of all this will be to slow things down. After all, the policy acts as a high-performance seatbelt, hooter and set of brakes while new drivers learn to drive the car. In fact, the exact opposite is likely true: it will speed up adoption, and nowhere will this be truer than in the space of regulation.
Creating a well-governed, culturally competent and standards-based incubator for baby AIs means that it will not be long before agencies feel empowered to conduct AI experiments and ultimately integrate AI into mainstream operations.
And this means that agencies will need to invest - truly invest - in ensuring that they stay engaged with and connected to the community, to understand its needs.
AI standards and guidelines have much to say about important AI concepts like 'safe', 'fair' and 'transparent', but there is no standard formula for how these concepts translate into your specific regulatory context.
Rather, these must be tailored to your regulated community, to the issues they face in complying, the culture of compliance across different cohorts, and to the needs they have when it comes to support - including for those experiencing vulnerability.
What does 'fairness' mean for people experiencing your AI-powered regulatory process?
What does 'safe' and 'unsafe' AI look like for the people, communities and organisations who will experience your AI system, directly or indirectly?
How does 'transparency' need to work in your domain to make people feel in control, and to stop them from being overwhelmed by process and information?
What level of human supervision needs to be present, and in what circumstances, to ensure that AIs are functioning not just with accuracy and lawfulness, but with compassion and empathy?
If you're accountable for a regulatory AI initiative, you probably don't know the answers yet. Instead, you'll need to tackle these questions with the mindset that experts will know some of the answers, and those with lived experience of the regulatory system will know the rest.
The Compassion Gap is one example of an issue that is important to many Australians, yet that won't appear in any manual, standard or ChatGPT prompt response. Instead, it, and other issues like it, will emerge from engagement, co-design and early testing of AI concepts with real people in the community.
There's lots to do in building a safe, responsible future for regulatory AI. But like all technological breakthroughs, it starts with people. You should too.