The white paper advocates an approach that avoids heavy-handed legislation, even as the rapid rise of ChatGPT demonstrates the kind of challenges that lie ahead
The Government’s five pillars of future AI regulation: In late March 2023, the UK Government unveiled a white paper setting out its approach to regulating AI. The aim is to build public trust in a technology that already contributes £3.7bn to the UK economy, and to help businesses innovate, grow and create jobs. Five non-statutory principles underpin the ‘national blueprint’, which will guide the use of AI across the UK:
Applications of AI should function in a secure, safe and robust way, with risks carefully managed;
The development and deployment of AI should be suitably transparent and explainable to match the risks it could pose;
The use of AI should be compliant with UK law and must not discriminate against individuals or create unfair commercial outcomes;
There should be appropriate oversight of AI applications and accountability for the outcomes; and
People must be able to dispute harmful outcomes or decisions generated by AI.
UK places faith in (and burden on) sectoral bodies, in contrast to the EU’s approach: The Government sees AI as one of the five ‘technologies of tomorrow’ and is keen to harness its potential to unlock productivity and growth while promoting responsible innovation. As AI continues to develop, the Government considers it paramount to create an environment in which the technology can flourish safely, mitigating future risks to privacy and human rights. To achieve this, the white paper advocates an adaptable approach to governing AI that avoids heavy-handed legislation. Rather than establishing a new, single AI regulator, the UK will lean heavily on existing ‘world class’ authorities to devise tailored, context-specific guidance and rules that reflect how AI is actually being used in their sectors. Though the Government notes the potential for legislation, when parliamentary time allows, to ensure regulators consider the five principles consistently, there is a clear contrast with the centralised nature of the EU’s draft AI Act, under which the European Commission would have an enforcement role, including the power to impose financial penalties for non-compliance.
Data protection agencies launch investigations into ChatGPT: The UK intends to allow its rules to change as this fast-moving technology develops, ensuring protections for the public without holding businesses back from using AI. The scope for such rapid evolution has been evident in the emergence, widespread adoption and even subsequent prohibition of ChatGPT since late 2022. Not long after becoming the latest AI sensation, ChatGPT was hit with a temporary ban in Italy on the grounds that it could violate the General Data Protection Regulation (GDPR). The CNIL in France and the AEPD in Spain have also opened investigations into complaints made against the chatbot, further demonstrating the appetite of European data privacy bodies to intervene in the absence of a finalised AI Act. Despite these concerns, witnesses at a recent hearing of the UK’s Science and Technology Committee urged national investment in the large language models (LLMs) that underpin generative AI tools such as ChatGPT, to build ‘sovereign capability’ and help the country keep pace with its peers.