AI, disinformation and elections in 2024

As more than 60 countries head to the polls, regulators and tech firms alike are concerned about the impact AI could have on the spread of disinformation. We consider the experience of elections held so far and the policy responses aimed at securing votes.

  • Since the 2018 Cambridge Analytica scandal, advances in artificial intelligence have accelerated both the creation and circulation of disinformation online. Prominent deepfake incidents in Slovakia, the UK and the US have raised concerns about disinformation during elections.

  • Disinformation operates at the user level by influencing beliefs and behaviours but, in the most extreme cases, can more broadly undermine trust in electoral processes. Hostile foreign nations have been found to employ a variety of disinformation tactics, including creating fake media reports and running astroturfing campaigns, both of which are amplified by AI, to undermine elections abroad.

  • The widely expected surge in AI-powered disinformation has not yet materialised during this election year. While Taiwan experienced a documented increase in disinformation, including campaigns originating in China, India and the EU do not appear to have experienced major disinformation incidents in the run-up to their votes.

  • Regulators have adopted a variety of policy responses to disinformation, largely influenced by the online safety regime in their jurisdiction. While the EU, Australia and South Africa have leveraged their online safety laws to limit disinformation at the platform level, Canada and the US have focused regulatory responses on policing the behaviour of individual users as well as political groups and campaigns.

  • Tech companies have also launched new features in response to the development of AI and in preparation for elections. However, these efforts almost always rely on users to take responsibility for seeking out the additional context or information needed to assess the potential disinformation they encounter.

The growth of AI has deepened concerns about election disinformation and influence campaigns

When the platform economy was first developing in the early decades of the 21st century, new technologies such as social networking and digital advertising were cheered as tools to improve social cohesion and spread democratic values. Now only a few years on from these early assessments, it has become clear that these digital tools can just as easily be used to negatively influence social and political systems. As revealed in a series of inquiries into Facebook and Cambridge Analytica in 2018, the use and alleged abuse of personal data powered the hyper-specific targeting of political content to users, including disinformation (false content spread deliberately) and misinformation (false content spread unknowingly). While this process was aided by the algorithms that automate digital advertising and curate social media feeds, the rapid growth in other applications of artificial intelligence has come to dominate discussions around the security and sovereignty of the 2024 election year, during which at least 64 countries and half the world’s population go to the polls.

Though targeting and promotion practices for content remain a concern for the spread of disinformation online, the rise of generative AI has driven widespread fears about the ease of creating convincing disinformation, including deepfake content (false but realistic depictions of real people and events). In 2023, a viral deepfake audio file of Progressive Slovakia party leader Michal Šimečka served as an early warning of the content likely to emerge in other upcoming elections. In the UK and the US, deepfakes of Labour Party candidates as well as President Joe Biden have already stoked fears about the influence that AI-created disinformation could have on electoral processes. Analysing reporting from votes already held in 2024, we detail the potential threats posed by AI-driven disinformation, discuss whether these harms have materialised in elections to date and outline the policies and plans of global regulators and tech firms to mitigate these issues.

Disinformation can influence voters' opinions on candidates and current events as well as undermine trust in electoral processes

Most concerns related to the influence of disinformation begin at the user level but scale up to broader fears about the sovereignty of entire elections (see Figure 1). On an individual basis, disinformation works to manipulate the beliefs and behaviours of voters and is particularly effective when it engages confirmation biases. While disinformation can be created and spread within domestic political contexts, a number of democracies, including the US and the EU, have expressed continued fears about the use of disinformation in foreign malign influence operations by countries including Russia, China and Iran. Common tactics detailed by the US Cybersecurity and Infrastructure Security Agency include creating fake media outlets and fake accounts, generating false content that is highly targeted at specific groups and manufacturing the appearance of widespread support for false ideas or conspiracies, a practice known as “astroturfing”.

At its most dangerous, disinformation can not only alter voters' ideas about candidates or current events but also undermine the integrity of an entire election. As in the case of the deepfake audio of Michal Šimečka in Slovakia, disinformation can take the form of fabricated statements attributed to party leaders or government officials, or suggest false evidence of voter fraud or other electoral tampering. Disinformation can also directly discourage voters from participating in elections, as happened with the deepfake audio of President Biden released during a presidential primary contest this year. These efforts to undermine voters’ trust in elections have even resulted in violence targeting the officials responsible for running them, prompting regulators, including the Electoral Council of Australia and New Zealand and the eSafety Commissioner, to consider strategies to better address the spread of this content. In these most extreme cases, disinformation therefore poses a broad danger to the sovereignty of democratic countries.

The expected surge in AI-powered disinformation is yet to materialise

Leading into the 2024 election year, the increased threat of disinformation as a result of new AI technologies dominated news cycles and political debates. While some countries did experience documented increases in disinformation, a broader, global surge in falsehoods circulating online is yet to materialise. In Taiwan, where elections were held in January 2024, academic research found that China had targeted Taiwanese voters with disinformation, including deepfake videos suggesting vote tampering by election officials. In India, by contrast, AI-generated posts tended towards satire, mockery and other more benign forms of content that were not made to appear realistic. In the final lead-up to the EU parliamentary elections held in June 2024, the European Digital Media Observatory also reported that no major disinformation incidents took place, though a range of other false content circulated less prominently earlier in the campaign. While disinformation on a variety of topics has circulated in many of the countries that have voted so far, the scale and impact of these campaigns appear not to have reached the apocalyptic levels once feared.

The smaller-than-expected rise in disinformation could be attributed to a number of causes. As users spend increasing proportions of their time online, they may become better at judging the authenticity of the content they encounter, as suggested by rising levels of digital literacy. Additionally, with major democracies including the UK, the US and France yet to vote, some of the most highly anticipated and contentious contexts for disinformation may still emerge. From a more promising perspective, however, the relatively limited impact of disinformation on elections so far this year may be evidence of the early success of policies put in place by both governments and tech firms to protect the integrity of votes.

Frameworks to regulate disinformation tend to be supported (or limited) by online safety laws

As detailed in our Disinformation Policy Benchmark within our Platforms and Big Tech Tracker, governments and regulators around the world have proposed and enacted a range of policy responses to disinformation in the past five years. Most recently, these policies have been amended to encompass specific responses to the use of AI in creating false or misleading content. Though these efforts each target the same phenomenon, the limits of a jurisdiction’s existing legal framework for online safety and content moderation tend to dictate the reach of its disinformation regulation. Among countries that have adopted policies specifically aimed at disinformation, regulation tends to target individual users who may create or share disinformation, the platforms on which disinformation is hosted, or the political campaigns and groups creating advertisements and sponsored electoral content (see Table 1).

In countries where online safety legislation already exists, policies targeting disinformation tend to resemble or simply extend those frameworks. Since online safety regimes tend to target the behaviour of platforms rather than individual users, disinformation rules in these jurisdictions also tend to focus on obligations for platforms. In the EU, the Digital Services Act was extended to include additional obligations for regulated platforms on mitigating disinformation and responding to the potential harms of generative AI in electoral contexts. The Combating Misinformation and Disinformation Bill currently under consideration in Australia also borrows from the structure of the Online Safety Act in requiring platforms to abide by a code of practice on disinformation. While Brazil has not yet passed PL 2630 – also known as the Fake News Bill – the framework would introduce obligations for platforms to proactively moderate both illegal content and disinformation in a structure similar to the Digital Services Act. The South African Films and Publications Act requires platforms to submit disinformation mitigation plans to the Film and Publication Board, which also regulates online safety, although the law additionally assigns penalties to individual users found guilty of posting some types of disinformation related to hate speech or the incitement of violence. Rather than attempting to stifle disinformation at the source by targeting those creating posts, these platform-based approaches can also be more effective at limiting the reach of foreign malign influence campaigns, which may evade laws that police users or political groups.

Where content moderation laws are absent or online safety laws have limited reach, regulators have little statutory footing on which to write disinformation policies. As a result, governments have attempted to address disinformation through criminal codes, which target the behaviour of users, or through election and campaign laws, which place limits on the conduct of political groups. Amendments to Canadian election law introduced criminal liability for users found guilty of posting disinformation related to a specific candidate’s identity or status in a given race. Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA) also criminalised the communication of “false statements of fact” and created penalties for users found to disseminate disinformation with bots. In the US as well as Taiwan, election laws have instead been leveraged to target the behaviour of campaigns and political groups as potential culprits of disinformation. Under a set of pending rules proposed by the Federal Election Commission, American political advertisers could be prosecuted for using AI-generated content, including deepfakes, to wrongfully impersonate other candidates.

Tech firms' own responses to disinformation place a greater degree of responsibility on users

Given the level of attention paid to political disinformation following elections in 2016 and now in advance of 2024 votes, it is unsurprising that many big tech firms have been vocal in promoting the work they’ve done to attempt to limit disinformation on their platforms. While many firms have announced general safeguards imposed on new AI services, others have also invested in election-specific content and features.

The Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity represent the leading cooperative effort among tech firms to standardise transparency around AI-generated content. Founded by Adobe and joined by giants including Microsoft, Google, Amazon, Qualcomm and TikTok, the effort has produced the Content Credentials system, a provenance label attached to content that details the conditions under which it was created, including whether it was generated or modified using AI. Both Google and Amazon have announced similar watermarking procedures for their AI image generators. Meta also announced a number of updates related to the labelling of AI-generated content across its platforms. Though the unity around transparency achieved by the Content Authenticity Initiative and similar policies is impactful, content labelling will not limit the amount of false content on platforms and places the onus on users to view and digest the linked information about the content they encounter.
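To make the idea of provenance labelling concrete, the sketch below shows a heavily simplified, hypothetical manifest that is bound to a piece of content and signed so that tampering can be detected. The field names, helper functions and HMAC-based signing are illustrative assumptions only; they do not follow the actual Content Credentials (C2PA) specification, which relies on standardised manifests and certificate-based signatures.

```python
# Minimal sketch of a provenance manifest in the spirit of Content Credentials.
# Field names, helper functions and the HMAC signing scheme are illustrative
# assumptions; they do not reflect the real C2PA specification.

import hashlib
import hmac
import json


def build_manifest(content_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Record how a piece of content was produced, bound to its exact bytes."""
    return {
        "content_hash": hashlib.sha256(content_bytes).hexdigest(),  # ties the claim to this file
        "generator": generator,                                      # tool that produced the content
        "ai_generated": ai_generated,                                 # the disclosure surfaced to users
    }


def sign_manifest(manifest: dict, signing_key: bytes) -> str:
    """Produce a tamper-evident signature over the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(signing_key, payload, hashlib.sha256).hexdigest()


def verify_manifest(manifest: dict, signature: str, signing_key: bytes) -> bool:
    """Check that the manifest has not been altered since it was signed."""
    return hmac.compare_digest(sign_manifest(manifest, signing_key), signature)


if __name__ == "__main__":
    key = b"demo-key"                 # a real system would use certificate-based signing, not a shared secret
    image = b"...raw image bytes..."  # placeholder content
    manifest = build_manifest(image, generator="example-image-model", ai_generated=True)
    signature = sign_manifest(manifest, key)
    print(verify_manifest(manifest, signature, key))  # True until the manifest or content is tampered with
```

Even in this toy form, the design choice behind labelling is visible: the provenance information travels with the content and can be verified, but it remains up to the user, or the platform surfacing the label, to actually check and interpret it.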

Specific to the 2024 election year, firms have also announced more targeted measures to address electoral disinformation on their platforms. In a proactive step, Google unveiled a so-called “pre-bunking” campaign to serve users with educational ads that introduce factual information about topics commonly targeted by disinformation, such as migration. OpenAI and TikTok also announced the launch of “election centres” where users can find factual information about upcoming elections within their platforms. Again, however, users must seek out further information to discount or contextualise the information in their feeds, shifting responsibility from the platform onto the individual. These interventions based on user empowerment can limit the regulatory burden amidst rising levels of digital literacy. However, the increasing sophistication of disinformation, driven both by technological advances like AI and by ongoing foreign influence operations, will only continue to test users’ ability to navigate online spaces.