As the UK gets to grips with foundation models of AI, the US starts a ‘frank’ conversation on the technology's risks
CMA to explore the opportunities and risks of foundation models: On 3 May 2023, the CMA launched an initial review to establish an early understanding of how AI foundation models are evolving and to produce an assessment of the conditions and principles that will best guide their future development and use. Foundation models, which include large language models (LLMs) and generative AI, are seen as having transformative potential; the Government, however, wants to ensure that innovation in AI continues in ways that benefit consumers, businesses and the economy. The CMA’s review will focus on the questions that the authority is “best placed to address”, namely:
Examining how the competitive markets for foundation models and their use could evolve;
Exploring what opportunities and risks these scenarios could bring for competition and consumer protection; and
Producing guiding principles to support competition and protect consumers as AI foundation models develop.
Review follows the UK’s pro-innovation approach to regulating AI: While AI has “burst into the public consciousness” recently, the CMA states that the technology has been on its radar for some time. Given the scope for AI to affect a range of important issues, including privacy and human rights, the CMA considers it crucial to guide the growth of the technology in ways that ensure open, competitive markets and effective consumer protection. The review follows the Government's recent AI white paper, which sets out its approach to regulating AI in order to build trust in a technology already contributing £3.7bn to the UK economy. Through the white paper, the Government has also asked regulators to consider how the development and deployment of AI can be supported against five principles, including security, transparency and fairness. The CMA is now seeking views and evidence from stakeholders by 2 June 2023, with a view to publishing its findings in September.
White House meeting marks a significant week for AI: The initial review also comes not long after the emergence of ChatGPT (an example of generative AI), a chatbot application that has driven concern among policymakers. Just a day after the CMA’s consultation window opened, senior members of the Biden Administration – including the President himself – met with the heads of Anthropic, Google, Microsoft and OpenAI to discuss the current and potential risks that AI poses to national security, society and individuals, including the impact on jobs. During a ‘frank yet constructive’ conversation, Vice President Harris stated that the private sector has an “ethical, moral and legal responsibility” to ensure the safety and oversight of its products. To coincide with the meeting, the US Government announced new actions to promote responsible AI, including funding for research institutes and public assessments of existing generative AI systems. Harris also outlined the potential to build on last October’s non-binding Blueprint for an AI Bill of Rights with formal legislation to ensure all citizens are able to take advantage of the rapidly evolving industry.