What the joint EU, UK, US statement on AI competition leaves unanswered

Though the statement was meant to signal consistency, regulators in each of these countries face an inflection point in their approach to regulating AI

The EU, UK and US affirm their cooperation on issues related to competition and AI

On 23 July 2024, the European Commission (EC), the UK Competition and Markets Authority (CMA), the US Department of Justice (DOJ) and the US Federal Trade Commission (FTC) issued a joint statement on competition in generative artificial intelligence (AI) and general purpose AI markets. This commitment to collaborating on creating fair, open and competitive markets and enforcing relevant laws to prevent anti-competitive behaviour comes as each regulator has launched new actions on AI-related competition issues in recent months. But while the statement projects a sense of unity in addressing competitive harms in AI markets, each regulator is also burdened by varying levels of uncertainty over its jurisdiction’s approach to regulating AI, as well as to regulating competition in the digital economy more broadly.

EU: How does a new EC coordinate AI regulation within its expansive digital rulebook?

Though the EU has seen the greatest degree of consistency in digital policymaking among its peers in recent years, successfully implementing its now extensive rulebook for digital platforms will pose a challenge to the newly appointed EC. As the AI Act comes into force on 1 August 2024, the newly formed EU AI Office will need to move quickly on the technical work of writing the definitions and thresholds necessary to implement the safety law. Though slightly further along in the implementation timeline, the EC also remains in the early stages of enforcing both the Digital Services Act and the Digital Markets Act, under which big tech firms are likely to face additional AI-related scrutiny, both for integrating AI services into existing platforms and for controlling related services that act as critical inputs to the AI value chain, including cloud computing. The challenge of harmonising the requirements of these laws will also fall in part to new leadership, as Margrethe Vestager (Executive Vice-President and Competition Commissioner, EC) ends a tenure as chief competition enforcer during which the EC launched a consultation on competition in generative AI and a merger inquiry into the Microsoft-OpenAI partnership.

UK: Will the Labour Government pursue AI safety as the CMA takes on greater digital competition powers?

Though the newly elected Labour Government in the UK campaigned on a pledge to regulate the most powerful AI platforms, there has been no indication yet, through the King’s Speech or otherwise, of whether or how the new Parliament will pursue legislation on AI safety. There is a sense the government may be looking to buy time, waiting to see how the approaches in the EU and US play out before deciding whether tighter rules are needed. Both before and during the General Election, however, the CMA was uniquely active among its peers in investigating threats to competition in AI and related markets. In addition to its ongoing market study on AI foundation models, the CMA has opened five merger inquiries into partnerships between AI firms and big tech firms since December 2023. The regulator’s Digital Markets Unit (DMU) has also been formally empowered to regulate competition in the digital economy with the passage of the Digital Markets, Competition and Consumers (DMCC) Act in May 2024. Given the broader scope of regulated activities set out by the DMCC Act, it is likely that the CMA’s work to date on AI markets will inform how the DMU pursues its first implementation actions under the law.

US: Would a change in political leadership undermine the US’s effort to emerge as an unlikely leader in digital policy?

Unlike the UK and the EU, the US has not advanced legislation on either digital competition or AI regulation. Instead, the US’s increased attention to digital policymaking has been channelled through executive action driven by the Biden Administration. While the October 2023 Executive Order on AI instructed a whole-of-government response to emerging safety challenges in AI across all sectors, the FTC is taking a more targeted look at the state of generative AI markets through its general information gathering authorities. With the presidential election looming in November, however, it is unlikely that a second Trump Administration would continue this legacy of action on AI safety and competition; the Trump campaign has already pledged to revoke the Executive Order on AI. Congressional Republicans have also been critical of the FTC’s cooperation with European regulators on digital issues, which makes even the FTC and DOJ’s participation in the joint statement on competition and AI somewhat surprising. Though some tech policy matters, including online safety for children and data privacy, have seen more progress in this Congress than in decades prior, any eventual success in legislating on these topics, and any insight into how the US will approach regulating the tech sector more generally, will depend heavily on both the presidential and congressional election results in November.