As well as regulating AI, policymakers are themselves beginning to use it in their day-to-day work, from document review to spectrum management. So far, most have been slow to do so and remain cautious. International guidelines for the use of AI in the public sector could change that.
Various governments have championed an innovative approach to AI, and this extends to the use of AI by policymakers themselves. Most sectoral regulators, however, are adopting AI more cautiously than this government rhetoric would suggest.
Some regulators (such as the CMA in the UK and Ruter in Norway) and public sector bodies (e.g. the US Patent and Trademark Office and Singapore’s civil service) have begun to use AI tools for simple but time-consuming tasks such as document review, data management and pattern recognition. Others (such as ACMA in Australia) are also considering more complex AI applications, from dynamic spectrum access to creating new policy materials with generative AI.
Regulators have generally been slow to adopt AI into their day-to-day work at scale, for several reasons: the data protection and privacy risks posed by public sector AI use, and the tech sovereignty issues that arise when foreign tools and cloud services are used.
Guidelines for the use of AI in the public sector, now being drafted by governmental bodies and international organisations (the G7, for example, has published its own), could change this. Until then, we expect most activity to focus on establishing frameworks for the safe and responsible adoption of AI.
Governments are encouraging the adoption of AI to improve public services
Governments around the world are keen to embrace AI to improve public services as well as to drive growth and innovation. Though policymakers are primarily focused on addressing the risks associated with AI, they are also calling for regulators to use it in their own work. Using AI to automate time-consuming work in public services, such as evidence review and data management, is seen as an important step towards growth and innovation. Though its use remains contentious, the potential monetary savings from public service automation could make it a worthwhile pursuit: one estimate in the UK suggests savings of £12bn a year are possible. Such potential savings demonstrate why there is strong government support for regulator AI adoption.
In recent years there has been an expansion of resources dedicated to the use of AI by regulators. The EU’s competition authority, DG COMP, for example, has been clear that new technologies require new teams of experts, data scientists, economists and lawyers. There is an understanding that new approaches and expertise are required both to regulate AI and to adopt the technology into regulators’ own work. Similarly, in the UK, Ofcom now boasts approximately 60 AI experts, a number of whom have direct experience in developing AI tools. Though this improves regulators’ capacity to address AI-associated risks, it remains unclear how these resources have been or will be leveraged to support regulators’ own adoption of AI.
Regulators are not yet matching government enthusiasm for AI adoption
Although governments and policymakers have declared their ambitions for regulators to adopt AI in their work, these authorities are generally taking a more cautious approach. This is perhaps underlined by the growing body of AI regulation around the world that they are tasked with enforcing. Despite Ofcom’s new technical capacity, its 2024/25 Strategic Approach to AI is still focused primarily on the immediate task of regulating AI and ensuring its safe use, though it does state that Ofcom is exploring how to integrate AI into its own work. At the EU level, DG COMP has shared plans to integrate new technologies into its processes but has not explicitly focused on AI in this regard. It notes that new technologies need to be integrated into each stage of the regulatory process, including information gathering, analysis and decision-making, with the aim of making each stage better and faster. As frameworks and ambitions for the adoption of new technologies are increasingly communicated by regulators, the favoured approach to AI still appears risk averse.
Policymakers have begun to adopt AI in carrying out less complex tasks
There are a number of instances around the world where AI has been introduced into the regulatory process. Regulators that are adopting AI tend to do so for a limited set of similar functions, such as document review, mapping and data management (see Table 1). These tasks are limited in complexity, suggesting that regulators are using AI to automate laborious but simpler work. There does, however, appear to be an appetite for AI to handle more complex tasks such as spectrum management and generating new content.
One common use of AI by national regulators has been in mapping, data management and analysis, as seen in Canada and Germany. Among AI’s more reliable capabilities are pattern recognition and data management, making it particularly useful in mapping. According to France’s Conseil d’État, competition authorities are considering the use of AI models to monitor pricing data in order to detect cartels formed through algorithmic collusion. Pattern-recognition AI tools are also in use in Denmark, New Zealand and the US. The Government of Canada recently awarded a contract to the AI firm Ecopia AI to develop and analyse mapping data on the country’s digital divide. Ecopia will help identify connectivity gaps more precisely in order to accelerate the deployment of broadband infrastructure, improving the Government’s efficiency, and potentially its accuracy, in implementing its programmes. Similarly, Germany is funding AI-powered analysis and evaluation of remote sensing data to improve infrastructure, and authorities in Belgium are considering AI mapping tools in their fight against climate change.
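To illustrate the kind of pattern recognition involved in cartel screening, the sketch below flags unusually parallel pricing across firms. It is a deliberately simplified, hypothetical illustration, not any authority's actual method: the firm names, price series and the 0.95 correlation threshold are all assumptions made for demonstration.

```python
# Illustrative sketch only: a toy screen for parallel pricing that could hint
# at algorithmic collusion. Real competition-authority tools are far more
# sophisticated; firm names, prices and the threshold here are assumptions.
from itertools import combinations
from statistics import correlation  # requires Python 3.10+

weekly_prices = {  # hypothetical weekly prices per firm
    "firm_a": [9.99, 10.49, 10.99, 10.79, 11.29, 11.49],
    "firm_b": [10.05, 10.55, 11.05, 10.85, 11.35, 11.55],
    "firm_c": [9.50, 9.20, 9.80, 10.10, 9.90, 10.30],
}

SUSPICION_THRESHOLD = 0.95  # assumed cut-off for "unusually parallel" pricing

def flag_parallel_pricing(prices: dict[str, list[float]]) -> list[tuple[str, str, float]]:
    """Return firm pairs whose price series move almost in lockstep."""
    flagged = []
    for (f1, s1), (f2, s2) in combinations(prices.items(), 2):
        r = correlation(s1, s2)  # Pearson correlation of the two price series
        if r >= SUSPICION_THRESHOLD:
            flagged.append((f1, f2, round(r, 3)))
    return flagged

print(flag_parallel_pricing(weekly_prices))
# [('firm_a', 'firm_b', 1.0)] -- a candidate for closer human review
```

A screen like this only surfaces candidates for human investigation; parallel pricing alone is not proof of collusion, which is why such tools suit the evidence-gathering rather than decision-making stage.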
Another common use of AI is in document review and analysis, which we have found in use in Australia, New Zealand, Norway, Singapore and the US. The UK’s Competition and Markets Authority (CMA) has begun to use AI-based tools to aid its evidence review, a key aspect of its casework. In the US, AI has been used in a similar way by the US Patent and Trademark Office (USPTO) to assist examiners in reviewing and adjudicating patent applications. AI document review tools are particularly useful for these kinds of tasks because they can quickly scan, search, interpret, review and summarise complex texts. Such tools have also been used by the US Department of Veterans Affairs, by the Norwegian public transport authority Ruter, across New Zealand’s public services and by Singapore’s civil service via the Pair Suite, developed by the Singapore Government. Over 50 of Australia’s public services have also trialled similar tools.
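As a much-simplified illustration of how such review tools triage text, the sketch below ranks sentences by keyword frequency to produce a rough extractive summary. It is a toy stand-in for the commercial and in-house tools named above; the stopword list and scoring approach are assumptions, and real systems typically use large language models rather than word counts.

```python
# Toy extractive summariser: score each sentence by how many frequent,
# non-trivial words it contains, then keep the top-scoring sentences.
# A minimal stand-in for real document review tools, not any agency's system.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "for"}

def summarise(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)  # document-wide word frequencies

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)  # preserve original order

evidence = (
    "The merger raises prices in the retail market. "
    "Internal emails discuss coordinating retail prices with a rival. "
    "The weather was pleasant on the day of the hearing."
)
print(summarise(evidence))  # keeps the two substantive sentences, drops the third
```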
Spectrum management and generative tools represent the more ambitious, longer term applications for regulators
Regulators are beginning to show signs of a move towards AI tools that can assist decision-making and the creation of new materials. This marks a shift towards more complex, higher-stakes applications than the more common current uses such as document review.
Generative AI, which can create new content such as text, video and audio, has recently been among the most widely used forms of AI by consumers around the world, typically accessed through chatbots such as OpenAI’s ChatGPT. Although hugely popular among consumers, its potential for errors has likely kept it from wider use by regulators and other public sector bodies, though we are now seeing its use in Australia, the EU, New Zealand and Singapore. The European Commission, for instance, has recently rolled out its own ChatGPT-style tool, GPT@EC, a pilot project built to assist staff in drafting policy documents. If the tool can avoid the problem of inaccuracy, it could prove a useful drafting aid.
At this stage, the use of AI to create new policy materials certainly seems like an escalation. Writing policy materials applies AI to numerous functions, subjects and issues, as opposed to a more one-dimensional task such as pattern recognition. Generative AI of this kind is likely already in use by many regulators in a less apparent way through its integration into common software such as Google Search, Microsoft’s Copilot (which has been trialled by Australian public services) and Adobe’s Sensei AI. The USPTO, however, has recently banned almost all internal use of generative AI.
One use of AI which has not yet been implemented but is under consideration is dynamic spectrum access (DSA). It is understood that the Australian Communications and Media Authority (ACMA) is open to trialling AI to improve spectrum sharing arrangements, which could improve efficiency in spectrum usage and allocation. AI and various forms of automation could allow for time-based spectrum allocations, facilitating more precise sharing between different users by recognising when and where spectrum should be allocated. In the UK, Ofcom is also working towards dynamic approaches to managing spectrum access and is open to automation as a means of doing so.
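A minimal sketch of the time-based allocation idea follows: learn from historical occupancy which hourly slots a licensed (primary) user tends to leave idle, and offer those slots to secondary users. The observation data and the 80% idle threshold are illustrative assumptions, not ACMA's or Ofcom's actual approach, and real DSA systems would work on far finer-grained sensing data.

```python
# Illustrative time-based spectrum sharing: from historical occupancy data,
# identify the hourly slots a primary user leaves mostly idle, which could
# then be offered to secondary users. All data and thresholds are assumptions.
from collections import defaultdict

# Hypothetical observations: (hour_of_day, primary_user_transmitting)
observations = [
    (9, True), (9, True), (9, False),
    (13, False), (13, False), (13, True),
    (2, False), (2, False), (2, False),
]

IDLE_THRESHOLD = 0.8  # assumed: offer a slot if idle >= 80% of the time

def idle_slots(obs: list[tuple[int, bool]]) -> list[int]:
    """Return hours in which the channel was historically mostly idle."""
    counts = defaultdict(lambda: [0, 0])  # hour -> [idle_count, total_count]
    for hour, busy in obs:
        counts[hour][0] += 0 if busy else 1
        counts[hour][1] += 1
    return sorted(h for h, (idle, total) in counts.items()
                  if idle / total >= IDLE_THRESHOLD)

print(idle_slots(observations))  # [2] -- only 2am is reliably free to share
```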
Regulators have so far been wary of adopting AI due to privacy risks and problems of tech sovereignty
The use of AI tools in public services raises concerns about privacy and data protection, reliance on the foreign firms that develop them, the technical capacity of systems and the varying probability of failure (see Table 2).
Given the sensitive nature of the data that regulators handle (including identifying information about citizens and sensitive industry insights), the application of AI could put this data at risk: if, for example, a specific AI tool malfunctions, sensitive data could be leaked, putting consumers at risk. This threat also emphasises the importance of transparency in regulators’ work. As regulators start to use AI tools, it is imperative that they are transparent about how they do so, to avoid causing concern among citizens and the companies they regulate. A related risk arises when regulators use foreign AI tools and cloud services, which could leave sensitive consumer and regulator data vulnerable to external security threats.
The slow pace of AI adoption among regulators is also in part due to the current capability of the technology. Past automated systems, such as the algorithm used to decide the UK’s 2020 A-Level results, have shown that these new technologies don’t always work as intended, with a clear impact on those affected. The 2020 A-Level grades were decided by an Ofqual standardisation algorithm which largely allocated grades by leveraging schools’ past results, an approach found to be biased against certain schools and areas. Concerns over the accuracy and capability of new automated tools like AI are likely to contribute to regulators’ choices to limit adoption.
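To make the failure mode concrete, the sketch below shows a simplified, hypothetical version of standardising grades against a school's historical distribution: a top-performing student at a historically weaker school is marked down regardless of their own performance. This is a toy model of the bias mechanism, not a reconstruction of Ofqual's actual algorithm, and all the distributions are assumed.

```python
# Hypothetical illustration of why standardising grades to a school's past
# distribution can be biased: an individual's grade is capped by their
# school's history, regardless of their own performance. A toy model of the
# failure mode, not Ofqual's actual algorithm.

# Assumed historical share of each grade at two schools (A* .. C)
school_history = {
    "historically_strong": {"A*": 0.30, "A": 0.40, "B": 0.20, "C": 0.10},
    "historically_weak":   {"A*": 0.05, "A": 0.15, "B": 0.40, "C": 0.40},
}

def standardised_grade(school: str, rank_percentile: float) -> str:
    """Map a student's rank within their cohort onto the school's past grades."""
    cumulative = 0.0
    for grade, share in school_history[school].items():
        cumulative += share
        if rank_percentile <= cumulative:
            return grade
    return "C"

# Two equally top-ranked students (top 10% of their school's cohort):
print(standardised_grade("historically_strong", 0.10))  # A*
print(standardised_grade("historically_weak", 0.10))    # A -- capped by history
```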
The risks of outsourcing AI tools, as well as the potential costs of in-house development, may also contribute to the limited adoption so far. Unless regulators can go through the costly process of building internal AI models and tools, they will likely have to rely on external ones. As conversations on tech sovereignty and building domestic tech capacity gain traction, especially in the EU, contracting for AI functions from the limited number of scaled AI firms competing in key markets such as foundation models could result in greater reliance on foreign tech firms. One way regulators could mitigate this risk when outsourcing AI models is through the standards applied in competitive procurement processes. By encouraging capable domestic AI firms to compete for public sector contracts and embracing ‘policymaking by procurement’, regulators could not only encourage domestic AI innovation but also ensure sensitive data is well protected. Ireland’s national AI strategy has stressed the importance of innovative public procurement processes to encourage AI innovation and effective public sector adoption.
Canada’s approach to funding and contracting Ecopia AI is potentially an effective way for governments and the public sector to use external AI firms while protecting tech sovereignty. Although it is a private firm, Ecopia was created with funding from Canada’s government and is now being contracted to map the country’s digital divide. Ecopia and the Government have also collaborated to identify the areas of Montreal with the most road damage and flood risk, and to create high-precision maps of Canada in support of government net zero initiatives.
Governing bodies are setting guidelines for public sector AI use which could promote its uptake
The G7 has recently published its guidelines on the use of AI in the public sector. These more concrete principles could chart a path forward for regulators and other public service bodies seeking to adopt AI into their work. One key focus of the G7 document is the need to establish robust frameworks for the responsible use of AI within the public sector. The G7 sets out that these should include transparency requirements for public algorithms, regulations on automated decision-making, and risk management or ethical frameworks addressing the implications of the design and use of AI models. The G7 posits that with these legal and regulatory frameworks in place, safe, secure and trustworthy deployment of AI in the public sector will be possible. The guidelines also discuss the need to enable more systematic use of AI in the public sector, which they suggest can be promoted through improved data governance as well as the scaling up and replication of already successful AI initiatives. In short, the G7 has provided regulators with a roadmap towards their own adoption of AI; in the coming months and years, as regulators begin a more widespread adoption, this framework could play a vital role in developing actionable steps to mitigate risk.
The focus of AI guidelines for public sector use mostly aligns with current regulatory frameworks for AI use in the private sector. Generally, the regulatory focus centres on safety, particularly the transparency of algorithms and the responsible use of AI. Of the countries that are regulating AI in a way that emphasises innovation over safety, some are taking a similar approach to setting guidelines on public sector AI use. In the UK, for example, the Department for Science, Innovation and Technology (DSIT) has expanded its scope and size in order to unite efforts to digitally transform public services, while units within the department, such as the Responsible Technology Adoption Unit and the AI Safety Institute, have been tasked with ensuring safety.