The CMA’s early and active agenda for addressing the competitive risks posed by AI stands in stark contrast to the Government’s more pro-innovation approach
The CMA releases its strategic approach to AI, detailing its world-leading efforts to address potential harms to competition posed by the technology
On 29 April 2024, the Competition and Markets Authority (CMA) published its strategic approach to AI, as requested by the UK Government’s AI white paper. The plan sets out the risks AI poses to competition, the CMA’s capability to respond to those risks and the regulator’s ongoing work to promote competition in relevant markets. In addition to anti-competitive risks within the AI value chain, the CMA outlines a number of potential harms consumers could face as AI is deployed more broadly across other markets. The regulator cites algorithmic price setting, personalised pricing and misinformation-driven consumer fraud as ways in which AI could affect competition across the whole of the economy. The CMA concludes by previewing forthcoming research on AI accelerator chips, consumer understanding of foundation models and the interaction of data protection, consumer safety and competition in foundation models. Through its investigation of the cloud market, a key input to AI, and the early launch of its work on the AI value chain, the CMA has become an international leader in addressing the threats to competition posed by AI, a position reflected and expanded on in this strategic approach. The CMA’s work also stands out against the lighter-touch, innovation-centric approach the UK Government has so far adopted to regulating AI.
The dominance of GAMMA firms underlies a set of interconnected risks in the foundation models market
As a key pillar of its early work on the AI value chain, the CMA’s ongoing study of the market for foundation models has produced a broad roadmap for potential regulatory intervention to preserve or strengthen competition. In an update paper published in April 2024, the CMA identifies three related risks posed by the structure of the foundation models market:
Firms that control key inputs, such as data and computing power, could restrict access in order to limit competition downstream;
Powerful incumbents could use their dominance in downstream consumer markets to distort choice in the deployment of foundation models; and
Partnerships could be strategically leveraged to reinforce or extend market power throughout the value chain.
These harms are described in the context of the dominance of American big tech companies, collectively referred to as GAMMA (Google, Apple, Microsoft, Meta and Amazon), across various inputs and outputs of foundation models, as well as in the market for the models themselves. In developing safeguards against or remedies for these risks, the CMA lays out a set of core principles to govern AI markets: access, diversity, choice, fair dealing, transparency and accountability.
The regulator’s invitation to comment on AI partnerships responds to a risk identified in the foundation models market
Building on the harms articulated in the foundation models study, the CMA has also opened an invitation to comment on partnerships in AI. In announcing the start of its information-gathering process, the regulator describes its concerns about partnerships between big tech firms and smaller challengers, and specifically notes that big tech hiring practices may be relevant under UK merger rules. While a number of competition regulators around the world have already begun investigating the relationship between Microsoft and OpenAI, the CMA is now focused on Microsoft’s relationships with both European AI champion Mistral AI and American firm Inflection AI, as well as Amazon’s partnership with Anthropic. The announcement further notes that these three relationships make up only a small fraction of the “interconnected web of over 90 partnerships and strategic investments” identified in AI markets.