

AI Chatbots Shift Voter Opinions in New Studies Ahead of 2024 Elections

Editorial


A series of studies published on October 5, 2023, reveals that interactions with AI chatbots can significantly influence voters' political views. The research indicates that even brief conversations with partisan chatbots can sway opinions, and that evidence-based arguments, regardless of their accuracy, prove particularly compelling.

The experiments used generative artificial intelligence models, including OpenAI's GPT-4 and the Chinese model DeepSeek. Findings showed that supporters of Republican candidate Donald Trump were swayed toward his Democratic opponent, Kamala Harris, by nearly four points on a 100-point scale ahead of the 2024 U.S. presidential election. Similarly, in experiments conducted in Canada and Poland, voters' opinions shifted by as much as 10 points after engaging with a chatbot designed to persuade.

According to David Rand, a professor at Cornell University and a senior author of the studies published in the journals Science and Nature, the results demonstrate a considerable potential for AI to influence electoral decisions. “When we asked how people would vote if the election were held that day, roughly one in ten respondents in Canada and Poland switched their preferences,” Rand explained in an email to AFP. “In the United States, about one in 25 did the same.” He emphasized that while voting intentions are indicative, they do not directly translate to actual votes cast at the ballot box.

Chatbot Tactics and Persistence of Influence

Follow-up surveys indicated that the persuasive effects of these interactions could persist over time. In Britain, approximately half of the influence remained after one month, while in the United States, about one-third of the effect was still evident. Rand noted that in social science, evidence of effects lasting a month is relatively rare, underscoring the significance of these findings.

The studies identified that the primary strategies used by chatbots to persuade users included maintaining politeness and providing substantial evidence. In contrast, bots that refrained from using factual support were considerably less effective. “These results challenge the prevailing notion in political psychology that individuals are likely to dismiss facts that contradict their identities or partisan beliefs,” Rand stated.

Despite the effectiveness of these chatbots, it is important to note that the information they provided was not always accurate. Rand pointed out that while many claims made by the AI were factually correct, "AIs advocating for right-leaning candidates made more inaccurate claims." This discrepancy likely arises because the models reflect patterns in their training data, in which right-leaning content on the internet has tended to be less accurate.

Future Research Directions

The research involved thousands of participants recruited through online gig-work platforms, all of whom were informed in advance that they would be conversing with AI. Rand is optimistic about ongoing investigations into the extent of AI's capacity to influence opinions, noting the potential to assess the impact of newer models, such as GPT-5 or Google's Gemini 3, on voter attitudes.

As the political landscape evolves, these studies underline the need for awareness regarding the role of AI in shaping public opinion. With the 2024 U.S. presidential election approaching, the implications of these findings could be profound, raising questions about the integrity of voter decision-making in the digital age.
