New Study Suggests AI Could Convince Conspiracy Theorists They’re Wrong

Can artificial intelligence help solve hard problems? It’s a question you see more and more these days, and the answer is usually no. But a new study published in the journal Science suggests that large language models may actually be a useful tool for changing the minds of conspiracy theorists who believe incredibly stupid things.

If you’ve ever talked with someone who believes a ridiculous conspiracy theory, whether it’s that the Earth is flat or that humans never actually landed on the Moon, you know they can be pretty set in their ways. They tend to dig in further the more they’re challenged, insisting that some wildly implausible theory explains how the world really works.

The paper, titled “Durably Reducing Conspiracy Beliefs Through Dialogues With AI,” tested the technology’s ability to talk with people who believe conspiracy theories and persuade them to reconsider their views on a given topic.

The study involved two experiments with 2,190 Americans, each of whom described, in their own words, a conspiracy theory they earnestly believed. Participants were asked to lay out the evidence they thought supported their theory, and then they engaged in a conversation with a chatbot built on the large language model GPT-4 Turbo, which responded directly to that evidence. In the control condition, participants talked with the chatbot about a topic unrelated to conspiracy theories.
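The paper’s actual prompts and pipeline aren’t reproduced here, but for a rough sense of how this kind of setup works, here’s a minimal sketch. It assumes the OpenAI Python SDK and the “gpt-4-turbo” model name, and the system prompt and function names are hypothetical stand-ins, not the researchers’ code:

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

    # Hypothetical instructions; the study's actual prompts are not shown here.
    SYSTEM_PROMPT = (
        "The user believes a conspiracy theory and has described their evidence. "
        "Respond politely with accurate, specific counter-evidence that addresses "
        "each of their points."
    )

    def debunk_turn(history: list[dict], user_message: str) -> str:
        """Send one conversational turn to the model and return its reply."""
        history.append({"role": "user", "content": user_message})
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    # Example turn: the participant states their belief and supporting evidence.
    history: list[dict] = []
    print(debunk_turn(history, "The Moon landing was staged; the flag waves in a vacuum."))

The key idea this illustrates is the one the authors emphasize: because the model sees the participant’s own stated evidence in the conversation history, it can tailor its counterarguments to that specific claim rather than delivering a generic fact-check.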

The study’s authors turned to AI because they suspected that part of the problem with tackling conspiracy theories is sheer volume: there are so many of them that debunking each one requires a level of specificity that’s difficult to achieve without specialized tools. And the authors, who recently spoke with Ars Technica, were encouraged by the results.

“The AI chatbot’s ability to sustain tailored counterarguments and personalized in-depth conversations reduced their beliefs in conspiracies for months, challenging research suggesting that such beliefs are impervious to change,” Ekeoma Uzogara, an editor, wrote in a summary of the study.

The AI chatbot, known as Debunkbot, is now publicly available for anyone who’d like to try it out. And while these kinds of tools need more study, it’s encouraging to see evidence that they may be useful for battling misinformation. Because, as anyone who’s spent time online recently can tell you, there’s a lot of nonsense out there.

From Trump’s lies about Haitian immigrants eating cats in Ohio to the claim that Kamala Harris wore a secret earpiece hidden in her earring during the presidential debate, countless new conspiracy theories have emerged on the internet this week alone. And there’s no sign that the pace will slow anytime soon. If AI can help fight back, that can only be good for the future of humanity.