Cornell: AI Chatbots Can Effectively Sway Voters – In Either Direction

Insider Brief

  • Cornell researchers found that even brief interactions with AI chatbots can meaningfully shift voter attitudes on candidates and policies, with effects far exceeding those of traditional political advertising.
  • Experiments across the U.S., Canada and Poland showed chatbots moved opposition voters by up to 10 percentage points, and by up to 25 points in a larger U.K. study, largely by generating high volumes of factual claims, including some inaccurate ones.
  • The findings highlight both the persuasive potential and the risks of AI-generated political messaging, underscoring the need for safeguards as conversational AI becomes more common in election contexts.

Artificial intelligence systems are emerging as potent tools of political persuasion, with Cornell University researchers reporting that even short exchanges with a chatbot can meaningfully move a voter’s stance on candidates and policies.

According to Cornell, the findings, published in simultaneous papers in Nature and Science, suggest that the persuasive power of large language models comes less from manipulation and more from their ability to generate dense streams of factual claims in support of a position. The researchers noted, however, that these claims are not always accurate and can mislead by omission.

In the Nature study, researchers led by David Rand and Gordon Pennycook ran controlled experiments around national elections, instructing chatbots to persuade voters to favor a candidate or policy. The studies covered the 2024 U.S. presidential contest, the 2025 Canadian federal election, and the 2025 Polish presidential race. In the U.S. experiment, chatbots nudging voters toward a candidate shifted opinions by several points on a 100-point scale, an effect far stronger than those of traditional campaign ads measured in prior election cycles.

But the impact was even larger in Canada and Poland, where the models moved opposition voters’ attitudes and intentions by around 10 percentage points. The scale of these shifts surprised the researchers, who emphasized that chatbots perceived as polite and fact-based tended to be most effective.

The researchers also highlighted a potential risk. When chatbots were instructed to avoid factual claims, their influence collapsed, underscoring how central evidence, real or fabricated, is to AI persuasion. Fact-checking conducted with a validated AI system showed that while most claims were accurate overall, models advocating for right-leaning candidates produced more inaccuracies across all three countries, according to the researchers. This pattern echoed longstanding research showing that right-leaning social media users tend to share more inaccurate information than users on the left, Pennycook said.

The companion Science study with the U.K. AI Security Institute broadened the analysis across nearly 77,000 participants and more than 700 issues. Here, the team probed which traits make a chatbot persuasive.

“Bigger models are more persuasive, but the most effective way to boost persuasiveness was instructing the models to pack their arguments with as many facts as possible, and giving the models additional training focused on increasing persuasiveness,” Rand said. “The most persuasion-optimized model shifted opposition voters by a striking 25 percentage points.”

That 25-point shift is one of the largest persuasion effects ever observed in political communication research. Yet the researchers found that those same models became less accurate as their persuasiveness climbed, suggesting that when pushed to supply ever more facts, they eventually exhaust reliable information and begin to invent details.

A recent third study in PNAS Nexus strengthened the case that content — not perceived authority — is what drives opinion change. In that work, AI-written factual arguments reduced belief in conspiracy theories even when participants believed they were speaking with a human expert, according to Cornell. Across all experiments, participants were told they were conversing with AI systems, and the direction of persuasion was randomized to avoid shifting public sentiment overall.

Together, the papers underline the importance of studying these tools now, before they are widely deployed in real campaigns. Rand argues that chatbots can be powerful only if people choose to interact with them, but the growing presence of conversational AI in everyday applications makes such contact increasingly likely.

“The challenge now is finding ways to limit the harm — and to help people recognize and resist AI persuasion,” Rand said.

Greg Bock

Greg Bock is an award-winning investigative journalist with more than 25 years of experience in print, digital, and broadcast news. His reporting has spanned crime, politics, business and technology, earning multiple Keystone Awards and Pennsylvania Association of Broadcasters honors. Through the Associated Press and Nexstar Media Group, his coverage has reached audiences across the United States.
