Deepfakes: A Growing Concern in Elections
Last month, New Hampshire voters received a robocall impersonating U.S. President Joe Biden, urging them not to vote in the state's presidential primary election. This alarming incident highlighted the potential harm generative AI can cause by spreading misinformation and fueling disinformation campaigns. In response, the Federal Communications Commission (FCC) declared AI-generated voices in robocalls illegal.
The issue of deepfakes has already affected elections in various countries. During recent elections in Slovakia, AI-generated audio recordings of a liberal candidate circulated on Facebook, while in Nigeria, an AI-manipulated audio clip falsely accused a presidential candidate of ballot manipulation. With over 50 countries holding elections this year, deepfakes have the potential to seriously undermine the integrity of these democratic processes.
In the past, deepfake technology was neither advanced enough to produce believable fakes nor accessible enough to be widely used for political disinformation. As deepfakes become more sophisticated and easier to create, however, they are likely to add to the already overwhelming volume of misinformation. Just last month, The New York Times published an online test challenging readers to distinguish real images from AI-generated ones, underscoring how difficult telling them apart has become.
It remains uncertain how generative AI will impact this year's elections. Bill Gates, in a blog post, suggests that significant levels of AI use by the general population in high-income countries are still 18-24 months away. Katie Harbath, former head of elections policy at Facebook, similarly predicts that although AI will be used in elections, it will not be as widespread as many imagine.
Nevertheless, the narrative surrounding deepfakes can itself undermine election integrity. The perception that "deepfakes are everywhere" can be exploited by those seeking to manipulate the media, breeding cynicism about the truth and eroding trust in democratic institutions.
To combat election-related AI-generated disinformation, lawmakers have introduced legislation in several states, requiring disclosure of the use of AI for election-related content. However, the passage of these bills is not certain.
While social media companies hold the most influence in limiting the spread of false content, their policies on removing manipulated content are limited to cases of "egregious harm" or misleading information about voting processes. Their primary response to AI-generated content is to label it as such, but these labels have yet to be rolled out, and users will need time to adjust to them.
Deepfakes may represent a new weapon in the arsenal of disinformation tactics, but the strategies to mitigate their damage remain the same. Responsible platform design, enforcement, and moderation, along with legal mandates where possible, are crucial in combating the spread of deepfakes. Journalists and civil society also play a vital role in holding platforms accountable. These strategies are now more important than ever in the fight against misinformation in elections.
Analyst comment
Neutral news: The rise of AI-generated deepfakes in elections raises concerns about misinformation and manipulation. The Federal Communications Commission has made AI-generated voices in robocalls illegal. The impact of deepfakes on this year’s elections is uncertain, but their prevalence could undermine trust in information. Legislation has been introduced to combat AI-generated misinformation, but social media platforms hold the most influence in limiting its spread. Responsible platform design, moderation, and accountability are crucial strategies to mitigate the damage of deepfakes.