Approaches to regulating AI

Ex-ante Regulation benchmark updated to include the Australian Government's proposal paper on regulating high-risk AI

We recently updated the Ex-ante Regulation benchmark in our AI Tracker to include the Australian Government's proposal paper on the future regulatory direction of AI in high-risk settings. Australia's proposal aligns with other international efforts, such as the EU's AI Act, in taking a risk-based approach to regulating AI. We are now aware of 20 attempts to regulate AI, although 10 of these, including Australia's, are still at the proposal stage. Of the total, 12 can be classified as prioritising safety and security, while 8 look to promote innovation.

Australia’s proposal sets out 10 regulatory guardrails for high-risk AI. These include mandatory testing and post-deployment monitoring, enabling human intervention, informing users when AI is being used, and providing a process for appealing or challenging the use of AI or a decision made by an AI system. The proposal also identifies General Purpose AI (GPAI) as categorically high-risk, because it can be applied in unforeseeable contexts it was not originally designed for.

The proposal cites adopted and proposed GPAI guardrail provisions from the US, EU and Canada, and draws on Canada’s proposed regulation for its own definition of GPAI. This Canadian-inspired definition focuses on the capability of GPAI models: whether a model is capable of being used, or adapted for use, for a variety of purposes. It is similar to the EU AI Act’s classification of GPAI models as posing systemic risk if they have ‘high impact capabilities’. Because the Australian definition captures any model that can be used or adapted for a variety of purposes, rather than only those crossing a high-capability threshold, more AI models are likely to be classified as ‘high-risk’ in Australia than under the EU’s AI Act.