As the world’s largest democracies prepare for elections throughout 2024, most have yet to regulate the use of AI in the context of misinformation.
UK Parliament debated the impact of the Online Safety Act on misinformation
On 16 January 2024, the House of Commons convened a Westminster Hall debate on preventing misinformation in online filter bubbles. The session, led by John Penrose MP, was framed by concerns about the growing threat of misinformation ahead of the general elections to be held in 2024 in numerous countries, including the UK. In his opening remarks, Penrose acknowledged the importance of the Online Safety Act in regulating factually inaccurate information but pointed to an ongoing lack of control over the spread of factually accurate but deeply biased information. He proposed creating a digital version of Ofcom’s due impartiality and due accuracy standards currently included in the Broadcasting Code. While he emphasised the similarities between algorithmic recommendation and editorial discretion, he stopped short of discussing any changes to the intermediary liability protections for tech platforms, which were largely maintained in both the Online Safety Act and the EU’s Digital Services Act (DSA). In response, Damian Collins MP detailed Ofcom’s new responsibility for ensuring algorithmic transparency and suggested that promoting trusted news sources, including through a news bargaining code, would be an effective way to combat misinformation.
Communications and Digital Committee launched a ‘future of news’ inquiry
A day later, on 17 January 2024, the House of Lords Communications and Digital Committee launched a related inquiry into the future of news in the UK. Again framed in the context of the upcoming election year and the continued development of artificial intelligence (AI), the inquiry will examine both immediate and longer-term issues facing the news media industry. The committee identified three main questions to address within the broader context of the sustainability of the news system:
Impartiality of public service broadcasting, especially in the context of political polarisation;
Trusted information, as news viewership declines and fears of AI-enabled misinformation rise; and
Tech platforms and business models that deprioritise news content and erode revenue for media outlets.
Taken together, these moves suggest Parliament intends to prepare for the impact that misinformation, and a broader lack of trust in news media, may have on the upcoming election, while facing challenges posed by platforms and emerging technologies like AI that extend beyond that immediate context.
Global policymakers are focusing on the impacts AI may have on elections
Elsewhere in the world, policymakers are attuned to the potentially vast impact that AI may have on the creation and spread of misinformation ahead of other elections to be held in 2024. The EU’s AI Act would regulate the creation of deepfakes, or deceptive AI-generated content, but enforcement won’t begin until after the upcoming election cycle, meaning the voluntary Code of Practice on Disinformation will likely form the basis of the European Commission’s (EC’s) preparations. In the US, the Federal Election Commission (FEC) is considering a ban on the use of generative AI to deliberately misrepresent opponents in political ads, but the regulator has given no indication of whether that rule change will occur before the presidential election in November. Some companies, including OpenAI and Google, have adjusted their terms of service regarding political content and AI. As prior election years have demonstrated, however, a lack of established ex-ante regulation can leave the door open to a range of harms, potentially on a greater scale than ever before as popular use of AI tools continues to grow.