Artificial intelligence’s role in crafting deceptive political content came to light recently with a robocall impersonating President Joe Biden that urged New Hampshire residents not to vote. The incident is believed to be an unlawful act of voter suppression targeting New Hampshire’s Democratic presidential primary. Disinformation and AI experts suggest the call was a deepfake, a term for AI-generated fake audio or video designed to imitate real individuals, often without their consent. The voice, while resembling Biden’s, exhibited an unnatural cadence, as heard in audio obtained by NBC News.
Identifying the specific AI program responsible for this deepfake is challenging. With the proliferation of easily accessible apps and online services, creating a reasonably convincing voice imitation has become straightforward and inexpensive. Such technology requires only minimal audio samples to replicate a person’s voice, making it simple to mimic public figures like politicians.
This incident is not an isolated use of AI for political misinformation. The AI Insider has highlighted similar instances worldwide, such as the creation of an AI-generated counterpart of Russian President Vladimir Putin and the use of AI voice mimicry to intensify tensions amid Sudan’s civil unrest. These examples underscore the growing concern over AI’s misuse in the political arena.
Despite the apparent criminality of efforts to prevent voting or voter registration, current regulations provide little oversight of the deceptive use of AI. Activists have urged the Federal Election Commission to address deepfake advertisements, but as of now, no rulemaking process concerning this technology has been initiated, according to a commission spokesperson. This regulatory gap, combined with AI’s increasing sophistication in creating convincing forgeries, poses significant challenges for maintaining the integrity of political processes and elections.