ATLANTISCH PERSPECTIEF | ANALYSIS
Electoral integrity is at stake in Super Election Year 2024
The imperative of technological democracies to counter AI manipulation
Jochem de Groot
Powered by generative AI (GenAI) that is now widely available, 2 billion voters around the world could be vulnerable to unprecedented disinformation during the many elections that will be held in 2024. GenAI even enables new forms of manipulation whose impact could go beyond that of fake news. For transatlantic democracies, the stakes in containing the technology are high: not only to ensure resilient and fair elections at home, but also to maintain their credibility as technological democracies in the face of China's growing global influence.
After US tech firm OpenAI launched ChatGPT, a chatbot built on its GPT-3.5 large language model (LLM), at the end of 2022, the fast-paced advance and global deployment of GenAI technology was one of the defining developments of 2023.
The potential of GenAI to transform societies and economies is significant. It can spur innovation, reduce costs, increase efficiency and provide novel solutions to the world's most pressing problems. But there is also growing anxiety, and evidence, that the technology is already having profound negative effects, including reduced privacy, increased wealth disparity, copyright infringement, amplified bias and reputational harm, amongst other issues.
Super Election Year 2024
Some of these concerns are relevant to the impact GenAI can have on democratic systems around the world. Shortly before Taiwan's general election on January 13th, the use of GenAI applications to refine and diversify so-called cognitive influencing campaigns targeting Taiwanese voters via social media was attributed to Chinese state actors. And just this week, there were reports of voters in the US state of New Hampshire receiving AI-generated robocalls impersonating US President Biden. The cloned voice told recipients not to vote in the state's presidential primary on January 23rd, a clearly unlawful attempt to disrupt the election.
And the 2024 election season has only just begun. In what has been dubbed the Super Election Year, national elections are planned in more than 60 countries around the world in 2024, involving around 2 billion voters, or a quarter of the world's population. Populous countries such as India, Indonesia and Mexico will go to the polls this year, as will all EU citizens for the European Parliament elections (in June) and American voters for the US presidential election (in November).
When it comes to the resilience of the democratic process in the face of rapidly developing emerging technologies, the stakes for these transatlantic democracies are particularly high. Not only for the integrity of their own electoral procedures, the autonomy of their voters and the trustworthiness of outcomes, but also for their global evangelization of democratic values and the rule of law. As the vast majority of GenAI technology is developed by American companies, and the EU aspires to be the global leader in regulating AI, the credibility of the transatlantic partnership as a joint global bloc of technological democracies is at stake.

Banner outside of the European Parliament in Brussels. Photo: Shutterstock.com / Alexandros Michailidis
Social media and the mis- & disinformation power struggle
Since Russian trolls first meddled at scale in a national election, discrediting candidate Hillary Clinton in the 2016 US presidential election won by Donald Trump, misinformation (false information spread without intent) and disinformation (false information spread deliberately) have impacted elections around the world. Coordinated campaigns to influence voters were reported to have targeted elections in Germany (national elections, 2021), Canada (federal elections, 2021) and France (presidential elections, 2022), amongst other nations.
Under increasing pressure from governments to counter the proliferation of mis- and disinformation, particularly across the EU during the Covid-19 pandemic, social media companies including Meta (Facebook, Instagram & WhatsApp), Twitter (now X), TikTok and Alphabet (Google and YouTube) took a variety of measures to keep fake content off their platforms. These efforts included removing accounts spreading false information, limiting users' ability to interact with dubious information and fact-checking videos and other content.
But over time, not all measures held up, and some were even reversed. Following X's acquisition by Elon Musk, the platform vastly reduced the number of moderators monitoring it, resulting in an increase in hate speech and an exodus of advertisers. After multiple warnings from the European Commission that X was breaching the new EU Digital Services Act (DSA), Commissioner Breton recently announced that he had opened formal infringement proceedings against the platform for violating DSA obligations, including the obligation to counter illegal content and misinformation and to provide sufficient transparency to users.
Digital empires, AI regulation and electoral guardrails
The continued struggle between European lawmakers and American tech companies unfolds in the wider setting of global regulation of the digital economy. In her recent book Digital Empires, Columbia University Professor Anu Bradford outlines a framework capturing the characteristic features of the three main global regions and their stance toward technological governance: the free market-driven, hands-off model of the United States, the value-driven regulatory model of the EU, and the state-driven model of China. Though these may look like clearly distinct blocs with defining attributes, Bradford argues that in practice they are also a patchwork of mutual interests, alliances, deals and investments that can sometimes be complex and contradictory.
When it comes to regulating (generative) AI, the three technological power blocs take distinctly different approaches. Over the last few years, China introduced a number of sweeping AI rules focused on countering disinformation threatening the Communist Party, but also on ensuring that large language models guarantee "truthful data use and output". In the most recent versions of the legislation, however, the Chinese government has somewhat eased enforcement in areas where it would impair the ability of Chinese AI developers to stay globally competitive.
The US and EU in search of regulation
In December, key stakeholders in the EU reached a political deal on the AI Act, which is expected to come fully into force only in 2026. The legislation contains measures to address a wide array of risks associated with AI tools and applications, such as those posed by predictive policing or facial recognition technology. Though critiqued by some as too restrictive, further braking Europe's ability to become competitive in an industry landscape dominated by the US and China, the rules package is widely seen as the most comprehensive of its kind in the democratic world. When it comes to elections, the AI Act classifies AI used to influence electoral outcomes and voter behaviour as high-risk: EU citizens will have the right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI that affect their rights.
In the US, a deeply polarized Congress remains unable to pass meaningful federal AI regulation. In that legislative void, the Biden administration last fall introduced a broad Executive Order (EO) on AI, attempting to put guardrails on the technology. Though companies making the largest AI models will need to share the results of safety testing with the government before their public release, the EO leaves most of the industry largely undisturbed: there is no need to register for a license to train large models or to disclose competitive information.
In the realm of electoral disinformation, AI-generated deepfakes such as the Biden robocall and the Republican Party's dystopian film of what a world with a re-elected President Biden might look like have already played a role in the campaigns. To provide clarity about such deepfake use, Biden's EO directs the US Commerce Department to come up with guidance on watermarking AI-generated content. Still, any substantial control measures on deepfakes are currently only being developed at the state level: to date, California, Texas, Washington, Minnesota and Michigan have passed laws requiring any political advertising made with AI to disclose that fact, with other states expected to follow soon.
An AI-generated attack ad against Joe Biden, released by the Republican National Committee.
Beyond deepfakes: the perils of generative manipulation
Although the US and the EU clearly have vastly different approaches to AI regulation, millions of citizens in both regions are already active users of the technology, and through social media or traditional news reporting, everyone is to some extent exposed to it. And despite the EU's proactive regulatory stance, the most poignant electoral deepfake incident to date was reported not in the US, but in the EU. Just before Slovakia's October 2023 election, deepfake audio of pro-Western candidate Michal Šimečka talking about manipulating the election and doubling the price of beer went viral, and his party later narrowly lost the election to a pro-Russian party. Some commentators claimed that the clip had a significant impact on the results. Combined with the proven impact of mis- and disinformation on elections over the last decade, and the fact that GenAI tools are now widely and cheaply available, it is no surprise that these 'deepfakes on steroids' are a key concern for election officials everywhere.
But unfortunately, in the 2024 election season, GenAI may have more capabilities in store for actors intent on nudging voters in a desired direction. The current maturity of chatbots, combined with a growing offering of bespoke bots built on large language models for personal or professional purposes, also enables nefarious users to build chatbots that nudge voters or influential individuals in specific communities. Researchers have pointed out that in a closed person-to-bot dialogue, the intimacy of the interaction can make recipients of (deepfaked) information easier to manipulate. Such "weaponization of relationships" works particularly well if a bot is fed with publicly available data on the person it targets, allowing the conversation to be personalized as much as possible. Bad actors can thus engage voters in what might appear to be an informal conversation, but what is in reality a refined attempt to prod them into voting, or not voting, for a specific party or candidate. Especially in countries where concentrations of swing voters have outsized influence, as is the case in some US swing states, voter manipulation targeting these populations can have significant consequences for national outcomes.
The good news is that there is no solid evidence yet that such bots are being developed or deployed at scale. The bad news is that GenAI tools are developing extremely quickly: it would be a surprise if none emerged in some shape or form during the 2024 electoral year. Such bots would also be particularly difficult to track and mitigate, given the closed virtual environments in which these interactions take place, far away from the more easily traceable open social media platforms. What is more, though the EU AI Act classifies AI applications that influence voter behaviour as high-risk, even that component of the legislation will not be enforced until after the Super Election Year, in 2025 at the earliest. And in the US, it is extremely unlikely that federal legislation banning such manipulation will be passed before November.
The ideological divide in governing technology
The potential of refined GenAI-created deepfake content, combined with nascent applications such as manipulative bots and conversational AI, makes for an enormous challenge to electoral integrity in 2024. Though American companies including OpenAI, Microsoft, Google, Meta and Anthropic clearly lead GenAI development globally, Chinese companies are not far behind, and through the recent introduction of its generative AI regulation, the Chinese state keeps tight control over these AI developers. As Professor Bradford argues in Digital Empires, where she identifies China as a so-called techno-autocracy, China has been successful in exporting its authoritarian, state-driven model of technology governance to dozens of countries around the world through initiatives like its Digital Silk Road program. Through loans, investments and dependence on large infrastructure and software contracts, countries, particularly in the Global South, are implementing China's surveillance model and increasingly becoming part of its sphere of autocratic technological influence.
Despite disagreements between the US and Europe on a variety of issues and levels around technology governance, the EU and the US have a much broader, ideological interest in rapprochement and cooperation. In Bradford's terms, they are both techno-democracies. While trying to protect their own electoral integrity, they also attempt to inspire and aid other nations to rejuvenate democracy. The Summit for Democracy hosted by President Biden in 2023, attended by leaders of dozens of EU countries and democracies from around the world, is a good example of that effort.
For the credibility of such joint evangelization, it is crucial that the 2024 elections in Europe and the US prove robust to digital manipulation, a theme that featured on the Summit's agenda. The reputation of the US electoral system already took a particularly heavy blow following the social media-fuelled January 6th, 2021, attack on the US Capitol; beyond existential questions about the state of its domestic democratic fabric, the authority of the US to advocate the rejuvenation of democracy elsewhere is clearly on the line as well. With most state-of-the-art GenAI tools stemming from companies based in the US, that credibility will also depend on the US government's ability to contain the technology's influence, as it will be applied and abused in elections in democracies around the world.

US President Joe Biden and Secretary of State Antony Blinken at the Summit for Democracy, March 29, 2023. Photo: Flickr.com / The White House
Strategies of resilience
Despite their fundamentally different approaches to AI regulation, protecting electoral integrity is a goal the US and the EU share this year. A number of efforts are underway to cooperate and coordinate internationally on comprehensive AI oversight. But to date, as Marietje Schaake has argued, nation-states are nowhere near a binding treaty that can offer real protection and rein in corporations.
In that void, governments are effectively (and unfortunately) dependent on voluntary self-governance by private actors like OpenAI, the company behind ChatGPT. Last week, OpenAI announced a number of measures to counter election influencing in 2024, including banning the use of its technology for political ads and lobbying, prohibiting chatbots that mimic candidates or governments, and allowing users to report illicit use. In the absence of (enforced) regulation, AI platforms taking such measures themselves is better than nothing, and governments should press them to step up their efforts throughout 2024 as much as possible.
Detecting, mitigating and countering AI-enabled manipulation of voters and electoral proceedings is another area in which transatlantic governments must invest more resources. International cooperation to collect evidence and share trends, best and worst practices and effective remedies will be indispensable. Approaches that use GenAI technology itself to identify trends in GenAI-created disinformation campaigns, as appears to be an early lesson learned from Taiwan's 2024 election, merit further exploration.
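To make the idea concrete, the sketch below shows one much-reduced building block of such an approach: using an LLM to triage social media posts for likely disinformation so that human analysts can focus their attention. This is a hypothetical illustration, not the method actually used in Taiwan (which is not detailed here); the prompt, model name and example posts are assumptions, and any real deployment would require human review of every flag.

```python
# Hypothetical sketch: LLM-assisted triage of posts for election-integrity
# analysts. Model name, prompt and example posts are illustrative
# assumptions. Requires an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

posts = [
    "BREAKING: ballots for Sunday's vote are being pre-filled abroad!",
    "Reminder: polling stations are open from 8:00 to 20:00 on Sunday.",
]

SYSTEM_PROMPT = (
    "You assist election-integrity analysts. For each social media post, "
    "reply on one line with LIKELY-DISINFO or LIKELY-BENIGN, followed by "
    "a one-sentence rationale. Analysts review every flag; do not decide."
)

for post in posts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable LLM would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": post},
        ],
    )
    # Print the post alongside the model's triage label and rationale.
    print(f"{post[:60]!r} -> {response.choices[0].message.content}")
```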
Lastly, governments must also step up their efforts to increase public awareness and education. Educating voters on recognizing AI-generated misinformation, disinformation, deepfakes, and nudging through chatbots and conversational AI is essential. Broad communication campaigns must be launched as soon as possible to inform electorates about technological influencing and teach voters to recognize such manipulation.
The imperative of techno-democracies
In a world where powerful AI tools have become so easily accessible, the stakes for democracy in 2024 are unprecedented. The ability of the US and EU to act in unison in keeping AI companies in check will be essential for their global reputation as bearers of free and open elections, but also for their ambition, particularly in the EU, to lead in regulating the technology. Fortunately, the belief in human-centric technology regulation is no longer confined to Europe: a large majority of Americans now favour strong AI regulation, so the US government has a strong mandate to be much more proactive in putting guardrails on the technology.
Only together will the transatlantic partners be able to make strides in the global battle against the techno-autocracy of China and the countries that follow its example. They must show that, as democratic countries, they can keep their own companies in check, as China has already successfully done. This raises the long-term question of the extent to which democracies, like autocracies, will be able to gain control over the digital economy, but embedded within open, democratic societies.
Header photo: Shutterstock.com / Andrey_Popov

Jochem de Groot is the founder of strategic AI consultancy Humanative, and a member of the Peace & Security Committee of the Dutch Advisory Council of International Affairs (AIV). He formerly worked at Philips and Microsoft.