With the EU facing a long road ahead towards implementation, there remains scope for other jurisdictions to shape the global regulatory landscape on AI
EU strikes world-leading agreement on AI Act
On 9 December 2023, the European Parliament and the Council of the EU reached political agreement on the AI Act. Proposed by the European Commission (EC) in April 2021, the act is the world's first binding piece of legislation regulating AI. Agreement was reached following a final interinstitutional negotiation that lasted over 36 hours, itself preceded by weeks of technical work and unofficial exchanges between Members of the European Parliament (MEPs) and the Spanish presidency. Broadly, the regulation adopts a risk-based approach, assigning safety obligations to both developers and deployers of targeted and general purpose AI (GPAI) systems and models. Further technical work will continue to finalise the legislative language, and the agreement still awaits formal adoption by Parliament and the Council. Once the AI Act becomes law, its provisions are expected to take full effect across Member States after 24 months.
Negotiations focused on GPAI, foundation models and biometric recognition
Leading up to the final trilogue session, which started on 6 December, negotiations stalled over disagreements between Member States and MEPs on how to regulate foundation models and GPAI systems. The political agreement broadly follows a compromise proposed by the EC in the final days before the trilogue, with additional rules agreed for "high-impact" foundation models and GPAI models that pose "systemic risk". Parliament's co-rapporteurs also faced pressure from centre-left MEPs to uphold firm restrictions on the use of biometric recognition systems, such as facial recognition, by law enforcement and national security services. While limits were placed on certain uses of these technologies, including some forms of predictive policing and the inference of sensitive personal data about individuals, the agreement does allow law enforcement to deploy biometric identification in specific circumstances related to the prevention and investigation of serious crimes. Recognition systems of this kind were also banned in workplaces and educational institutions.
Marathon trilogue also settled a number of smaller but still important items
While the regulation of foundation models drove public debate ahead of the final negotiating session, a number of other issues also reached a landing zone. The agreements on these points reflect greater alignment with MEPs' positions going into the meeting:
An EU-wide AI Office will be established and tasked with oversight of the most advanced AI systems;
Fundamental rights impact assessments will be mandatory for all high-risk AI systems;
Energy efficiency reporting has been included as a requirement of high-impact foundation models; and
Maximum penalties were set at €35m (£29.9m) or 7% of a company’s global turnover, with proportionate caps for SMEs and start-ups.
Negotiators also finalised the conditions for deploying regulatory sandboxes across Member States, intended to support the growth of EU-based AI firms while maintaining compliance with the new regulation.
Focus turns to how the AI Act may impact global approaches to AI regulation
Even as the EU claims a first-mover advantage in regulating AI, jurisdictions around the world are accelerating efforts to address both the harms and the innovation opportunities the technology presents. Through an Executive Order, the US has launched a whole-of-government response to the risks of AI, although the White House remains limited in its ability to change the practices of private firms without Congressional action. Despite hosting a summit on the long-term – and supposed existential – risks of emerging technologies, the UK appears committed to fostering AI innovation domestically, with less attention paid to risk management. While the EU faces years of preparation before it can enforce the AI Act, quicker progress by these and potentially other jurisdictions may well shape the global outlook for AI regulation, given how readily emerging technologies cross borders.