
Choosing the right use case for Generative AI in government

Related Topics:
Rethinking work
16 November 2024
Lee Rose - Synergy Group Partner, James Fernance - Executive Director, and Alex Hatch - Manager
8 minutes
Generative AI: The Dream

The increasing availability of compute power and vast data holdings has allowed generative AI to evolve from simple, deterministic algorithms to complex probabilistic models. It has quickly demonstrated exciting human-like abilities to understand natural language queries, respond with helpful data analysis, and generate entirely new audio-visual content, creating opportunities for governments to reshape how they deliver high-quality services safely and personalise customer experiences.


The Despair

Like all statistical and machine learning techniques, the efficacy of generative AI is dependent on the volume and quality of data fed into it.

There is an inherent tension in training a generative AI model to be as valuable as possible while navigating the ethical and legal constraints that apply to the data it needs. Publicly available models trained using large quantities of freely available data, including copyrighted artistic works, sensitive personal information, and illegal material, have exposed organisations using them to financial and reputational damage and legal sanction where this data has surfaced in their outputs.

Generative AI models will also present their output as valid regardless of whether the underlying training data is inaccurate, incomplete or biased. In some cases, AI “hallucinations” occur when a model perceives patterns in its data and produces output that is nonsensical to human observers. In other cases, generative AI has been used deliberately to create plausible text, images and audio that manipulate truth in media, politics and advertising.


Using Generative AI safely

Understanding training data defines the scope of what is possible. Identifying suitable training data means considering the classic data quality dimensions, including accuracy, bias and completeness, as well as any constraints on its use within the organisation’s legislative, policy and regulatory environment. Organisations should exhaustively test candidate training data with their governance and regulatory groups, which is particularly important when a model will be trained on publicly available or purchased data.
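As a minimal illustration of what such testing could look like in practice, the Python sketch below screens a hypothetical extract of candidate case records for completeness, duplication and representation skew before it is put to a governance group. The column names, thresholds and sample data are illustrative assumptions, not a prescribed standard.

```python
# A hedged sketch of screening candidate training data against classic data
# quality dimensions (completeness, duplication, representation skew).
# Columns, thresholds and sample records are illustrative assumptions only.
import pandas as pd

# Hypothetical extract of candidate case records.
candidates = pd.DataFrame({
    "case_id": [1, 2, 3, 4, 4, 5],
    "client_age_group": ["18-24", "25-34", None, "65+", "65+", "25-34"],
    "case_notes": ["note a", "note b", "note c", None, "note d", "note e"],
    "outcome": ["granted", "declined", "granted", "granted", "granted", None],
})

report = {}

# Completeness: share of missing values per field.
report["missing_rate"] = candidates.isna().mean().round(2).to_dict()

# Duplication: repeated case identifiers over-weight some patterns in training.
report["duplicate_case_ids"] = int(candidates["case_id"].duplicated().sum())

# Representation: a heavily dominant group is a simple proxy for sampling bias.
group_share = candidates["client_age_group"].value_counts(normalize=True)
report["max_group_share"] = round(float(group_share.max()), 2)

# Flags for the governance and regulatory groups to review before any training.
report["flags"] = (
    [col for col, rate in report["missing_rate"].items() if rate > 0.10]
    + (["duplicate case ids"] if report["duplicate_case_ids"] else [])
    + (["representation skew"] if report["max_group_share"] > 0.50 else [])
)

print(report)
```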

Generative AI models also require new data and feedback; without them, they become less accurate over time and the risk of inaccurate or misleading output increases. AI leaders should plan to continually measure a generative AI model’s accuracy and suitability and to continue its training over time.
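One way to make “continually measure” concrete is to re-score the model against a fixed, human-reviewed evaluation set on a regular schedule and escalate when quality falls below an agreed baseline. The sketch below assumes a placeholder generate_answer call and a crude token-overlap score; a real pipeline would substitute the organisation’s approved model and task-specific acceptance criteria.

```python
# A hedged sketch of ongoing accuracy monitoring against a human-reviewed
# evaluation set. generate_answer, the scoring rule and the baseline are
# placeholders for whatever model and acceptance criteria are actually agreed.
from dataclasses import dataclass

@dataclass
class EvalItem:
    prompt: str
    reference: str  # human-approved answer for this prompt

def generate_answer(prompt: str) -> str:
    """Placeholder for a governed call to the organisation's generative model."""
    return "identity and income documents support a claim"

def overlap_score(answer: str, reference: str) -> float:
    """Crude token-overlap proxy; real checks would be task-specific or human-led."""
    answer_tokens = set(answer.lower().split())
    reference_tokens = set(reference.lower().split())
    if not reference_tokens:
        return 0.0
    return len(answer_tokens & reference_tokens) / len(reference_tokens)

def run_evaluation(items: list[EvalItem], baseline: float = 0.7) -> None:
    scores = [overlap_score(generate_answer(item.prompt), item.reference)
              for item in items]
    mean_score = sum(scores) / len(scores)
    print(f"mean accuracy proxy: {mean_score:.2f}")
    if mean_score < baseline:
        print("Below baseline: escalate for retraining and human review.")

run_evaluation([
    EvalItem(
        prompt="What documents are needed to support a claim?",
        reference="identity and income documents are needed to support a claim",
    ),
])
```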

Generative AI’s probabilistic nature and inherent lack of transparency make it difficult to determine how it arrives at its output or to explain its reasoning. Decision makers and the staff incorporating generative AI into their day-to-day work must understand these limitations and become AI literate.

Failure to navigate the generative AI hype cycle can lead to reputational damage, negative social impact or even legal action. AI leaders must ensure that potential generative AI use cases are anchored to specific business needs and aligned with the technology’s capabilities.


Case management and delivery

The strength of generative AI is in practical applications that significantly augment workflow processes. These use cases, often found in operations, service delivery and compliance, save time and reduce cost by freeing staff for higher-value creative work, and they quickly inform the complex decisions organisations must make.

Case management is inherently a business function executed by knowledge workers: skilled experts who apply their experience and knowledge to processes and policies to deliver services where the absence of sound human judgment could have severe negative consequences.

The workflow steps that case management supports are discrete and modular, which lends them to integration with generative AI: outputs can be generated and attached to a case under human oversight. Case workers can then use these outputs to deepen their understanding of a case, inform related decision-making and tailor their interactions to the specific needs of their clients and stakeholders.

Possible applications within case management include:

  • Automated sentiment analysis and classification of voice and email to triage communications with a client (a sketch of this pattern follows the list).
  • Generation of next-step recommendations based on client profile, interactions to date and similar cases.
  • Review and categorisation of supporting case material, leading to summarised case notes that bring the case worker up to speed.
  • Design of policy and process by finding patterns and anomalies through case histories, family and cultural clusters and change modelling.
  • A generative AI experience to guide clients through a service, providing personalised and contextualised responses to citizen questions.
  • Training of case workers through the easy generation of synthetic cases based on historical data.
  • Ad hoc plain-text queries over integrated datasets to answer questions across a consolidated view of a client.
  • Draft report generation at the closing of a case.
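
As a sketch of the first bullet above, and of the attach-under-human-oversight pattern described earlier, the Python below classifies an inbound client message and attaches the draft triage result to a case for a case worker to confirm. The Case structure and the keyword heuristic are illustrative assumptions: the heuristic stands in for a governed call to an approved generative model, not a real agency API.

```python
# A hedged sketch of the triage bullet above: classify an inbound message and
# attach the draft result to the case for a human to confirm. The keyword
# heuristic stands in for a governed call to an approved generative model, and
# the Case structure is an illustrative assumption rather than a real system.
from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    attachments: list[dict] = field(default_factory=list)

URGENT_TERMS = ("eviction", "urgent", "unsafe", "no income")

def draft_triage(message: str) -> dict:
    """Stand-in for the model call; returns a draft, never a final decision."""
    lowered = message.lower()
    is_urgent = any(term in lowered for term in URGENT_TERMS)
    return {
        "sentiment": "negative" if is_urgent else "neutral",
        "priority": "high" if is_urgent else "routine",
        "status": "draft - requires case worker confirmation",
    }

def attach_triage(case: Case, message: str) -> None:
    # The AI output is attached to the case under human oversight,
    # not actioned automatically.
    case.attachments.append({
        "type": "triage",
        "source_message": message,
        "result": draft_triage(message),
    })

case = Case(case_id="C-1042")
attach_triage(case, "I have received an eviction notice and need urgent help.")
print(case.attachments[0]["result"])
```

The same attach-and-confirm pattern extends to next-step recommendations and summarised case notes: the model drafts, the case worker decides.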

These use cases require considered governance and management of both the data used to train a generative AI model and its output. They also need mature process design and cultural change to be truly effective.


Broader considerations

The rapid evolution of generative AI is outpacing existing legal frameworks, complicating its governance and ethical use. Government AI leaders should maintain a clear view of how generative AI is used in their organisation and of its place in the legislative and policy environment, so they can remain agile and responsive as new guidance emerges.

Generative AI is a powerful tool. It can save time, generate novel approaches to problems and shortcut laborious data insight processes. However, AI leaders must consider its place in their data and analytics ecosystem and the broad set of risks surrounding its use.
