Panellists questioned the long-term aims of the international AI summit series while encouraging participants to consider the future direction of the UK’s own tech policy
The AI Fringe gathered a cross-sector line-up to debrief the AI Seoul Summit
On 5 June 2024, leaders from government, industry, civil society and academia gathered for an AI Fringe conference as a follow-up to the AI Seoul Summit, held in South Korea the previous month. As with the inaugural AI Fringe that followed the UK AI Safety Summit at Bletchley Park, this event provided a debrief on the discussions and outputs of the latest summit and looked ahead to the priorities for the next international AI summit, which will be hosted in France in February 2025. Unsurprisingly, panellists from civil society and industry offered diverging assessments of the success of the global AI summit series to date.
Though panellists disagreed on the success of the series to date, all agreed on the need for a more clearly defined purpose for the summits moving forward
In discussing the progress resulting from both the UK- and Korea-led summits, panellists offered starkly different perspectives on the inclusivity of the summits as global events and on the series’ focus on existential and frontier AI risks. Sam Pettit (UK Public Policy Manager, Google DeepMind) credited the range of governments (28) and companies (16) that have signed up to different sets of voluntary commitments as a result of the summit series and stressed the importance of a continuing focus on the risks of the most powerful AI models. However, civil society members, including Resham Kotecha (Global Head of Policy, The Open Data Institute) and Professor Gina Neff (Executive Director of the Minderoo Centre for Technology and Democracy, University of Cambridge), expressed frustration at the limited role civil society had played in shaping these outputs and at its limited participation in the summits more broadly. Lord Tim Clement-Jones was the most outspoken in his criticism, declaring himself a “summit sceptic” and characterising the UK AI Safety Summit as a “distraction” for its focus on existential harms rather than the more immediate, everyday safety concerns raised by AI. He suggested that further summits should focus more on AI ethics and on international convergence in standards, both to provide clarity for businesses and to build public trust in the technology. Across both panels, speakers expressed a related concern that the AI summit series lacked a unified goal and a well-defined place in the international landscape of AI governance. Representing the French Government’s planning body for the next summit, Henri Verdier (Ambassador for Digital Affairs, Ministry for Europe and Foreign Affairs, France) promised a greater programming focus on the “current and proven consequences of AI” as well as a discussion more inclusive of civil society and the Global South.
While tech policy has yet to feature prominently in the UK general election campaign, speakers urged the future Government to be prepared to act
With the UK general election looming, speakers also reflected on the future of tech regulation domestically and on the UK’s standing in global AI governance. Francine Bennet (Interim Director, Ada Lovelace Institute) noted she was unsurprised that the issue had not yet featured prominently in the campaign, while Pettit suggested that some tech policy items may well be included in party manifestos, expected in the coming days. Bennet also argued that greater attention should be paid to the capacity of UK regulators to take on AI regulation. Daniel Privitera (Founder and Executive Director, KIRA Center) responded in part by praising the European Commission’s ability to hire top technical talent to staff its new AI Office following the passage of the EU AI Act, suggesting the bloc could serve as a model for other governments. Beyond AI-specific regulation, both Clement-Jones and Kotecha were generally pleased that the Data Protection and Digital Information Bill failed to pass during the “wash-up” period before the dissolution of Parliament, and urged that greater consideration be given to ensuring the UK creates the sort of protected, high-quality data ecosystem necessary to build impactful AI systems. Most broadly, Neff set out a call to action for UK policymakers in the coming months, arguing that a potential Labour Government must be prepared to act on tech policy questions and to take advantage of this pivotal moment in technological development.