
Chatbots presenting multiple viewpoints tend to be trusted more by conspiracy believers


A new study finds that conspiracy believers trust balanced AI chatbots more than anyone else does. The uncomfortable question is whether that trust is earned or exploited.

“A chatbot presenting multiple perspectives feels refreshingly balanced to people across the board — including those who distrust mainstream media.”

Shreya Dubey, University of Amsterdam

Most efforts to reach people who believe in conspiracy theories hit the same wall. Offer them mainstream news and they dismiss it as agenda-driven. Fact-check their claims and they dig in deeper. The problem is not just what information is presented; it is who is doing the presenting.

A new study published in Computers in Human Behavior suggests there might be a way around that wall, and it comes in the form of an AI chatbot. Researchers at the University of Amsterdam found that when a chatbot presented both mainstream and alternative perspectives on climate change side by side, treating neither as more authoritative than the other, people with strong conspiracy beliefs trusted the tool significantly more than people without those beliefs did.

The finding is striking. It is also, the researchers are quick to note, deeply complicated.

To test the idea, lead researcher Shreya Dubey and colleagues built a custom chatbot they called Infobot. The program presented users with eight news headlines about climate change — four drawn from mainstream scientific reporting and four from alternative sources, including articles that framed climate change as a political hoax or attacked climate policy.

Users could click any headline to read a brief AI-generated summary, after which that headline disappeared and they were prompted to choose another. Behind the scenes, the software quietly recorded two things: which articles people chose, and how long they actually spent reading each one.
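For readers curious about the mechanics, a minimal sketch of that kind of interaction logging might look like the Python below. The names (Session, open_summary, close_summary) are invented for illustration; the study does not publish Infobot's code.

```python
import time

class Session:
    """Hypothetical logger for the two behavioural measures the study
    describes: which headlines a user opens, and how long each summary
    stays open."""

    def __init__(self, headlines):
        self.remaining = list(headlines)   # headlines still available to click
        self.log = []                      # (headline, seconds_read) records
        self._open = None

    def open_summary(self, headline):
        self.remaining.remove(headline)    # a chosen headline disappears
        self._open = (headline, time.monotonic())

    def close_summary(self):
        headline, started = self._open
        self.log.append((headline, time.monotonic() - started))
        self._open = None

# Example: 4 mainstream + 4 alternative headlines, as in the study.
session = Session([f"mainstream_{i}" for i in range(1, 5)] +
                  [f"alternative_{i}" for i in range(1, 5)])
session.open_summary("alternative_1")
session.close_summary()
print(session.log)   # e.g. [("alternative_1", 12.3)]
```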

Sample size: 177 active participants in Study 1
Headlines: 4 mainstream + 4 alternative
Validation: 2 independent studies

After interacting with Infobot, participants filled out a survey rating the chatbot on ease of use, how useful they found it, how much they trusted it, and how likely they were to use something like it again. Participants had also been sorted in advance, based on their responses to a questionnaire about conspiracy beliefs, into groups with either high or low conspiracy thinking.
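As an illustration of how such a screen might work, the sketch below assigns participants to extreme groups from questionnaire scores. The scale, cutoffs, and scores are assumptions for illustration, not the study's actual criteria.

```python
def assign_group(score, low_cut=2.5, high_cut=5.5):
    """Scores on a hypothetical 1-7 belief scale; the middle is excluded,
    as in an extreme-groups design."""
    if score <= low_cut:
        return "low"
    if score >= high_cut:
        return "high"
    return "excluded"

scores = {"p01": 6.2, "p02": 2.1, "p03": 5.8, "p04": 1.4, "p05": 3.9}
print({pid: assign_group(s) for pid, s in scores.items()})
# {'p01': 'high', 'p02': 'low', 'p03': 'high', 'p04': 'low', 'p05': 'excluded'}
```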

The results were the opposite of what many would expect.

Across the board, both groups responded positively to Infobot. People generally found it useful and said they would use something like it in the future. But when the researchers compared the two groups directly, a clear and consistent gap emerged: the people with high conspiracy beliefs trusted the chatbot more, liked it more, and expressed stronger intentions to use it again.

“Most of us, regardless of our beliefs, tend to think we’ve formed our opinions objectively and from good information. Our findings suggest that a chatbot presenting multiple perspectives feels refreshingly balanced to people across the board.”

Shreya Dubey, postdoctoral researcher, University of Amsterdam

The researchers ran a second study with 58 participants to nail down the effect more precisely. This time, rather than measuring general conspiracy beliefs, they screened specifically for beliefs about climate change. They also added an attention check: participants had to enter a code displayed within the chatbot to prove they had actually engaged with it. The results held. High-belief participants again reported greater trust and were more enthusiastic about the tool.

The likely explanation, the team suggests, has to do with who the information is coming from. Research in media psychology has shown that people often apply a “machine heuristic”: an assumption that automated, computer-generated sources are more neutral and objective than human journalists, who might have political agendas. For people who already distrust the mainstream media, a chatbot may feel like a rare source that has no side to take. And when that chatbot also chooses to surface the alternative views they have always believed were being suppressed, it reads as confirmation that the tool is genuinely fair.

The behavioural tracking data revealed something the self-reports alone could not: what people said they thought of the chatbot and what they actually did with it were not entirely aligned.

While both groups selected a broadly similar mix of articles, the high-conspiracy group spent significantly less time reading the mainstream summaries than the low-conspiracy group did. The alternative articles? They lingered on those considerably longer.
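A comparison like that could be computed from dwell-time logs along the lines below. The numbers are invented for illustration; they are not the study's data.

```python
from statistics import mean

# Illustrative only: mean dwell time per belief group and source type,
# computed from (group, source, seconds) records like the logger above
# might produce.
records = [
    {"group": "high", "source": "mainstream",  "seconds": 9.0},
    {"group": "high", "source": "alternative", "seconds": 21.0},
    {"group": "high", "source": "mainstream",  "seconds": 7.5},
    {"group": "low",  "source": "mainstream",  "seconds": 16.0},
    {"group": "low",  "source": "alternative", "seconds": 15.0},
]

for group in ("high", "low"):
    for source in ("mainstream", "alternative"):
        times = [r["seconds"] for r in records
                 if r["group"] == group and r["source"] == source]
        print(f"{group}-belief readers, {source}: {mean(times):.1f}s average")
```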

User Interaction Patterns

Low Conspiracy Believers

Trusted the chatbot at a moderate baseline level.
Read mainstream & alternative summaries at a similar pace and depth.
Exhibited moderate enthusiasm for future adoption of the tool.
Less likely to perceive the AI tool as being “uniquely fair” compared to humans.

High Conspiracy Believers

Recorded significantly higher trust scores in the AI output.
Spent far less time scrutinizing mainstream summaries during the trial.
Engaged with alternative content more thoroughly and for longer durations.
Expressed a stronger intention to utilize the chatbot for information gathering again.

The takeaway from the tracking data is sobering: engaging with a balanced chatbot did not appear to push conspiracy-minded participants toward mainstream views. If anything, the balanced format may have functioned more as permission to seek out and dwell on the content they already wanted to read, framed by a reassuring sense that the tool itself was neutral.

This is where the study’s most important and most uncomfortable contribution lies. The design feature that made Infobot so appealing to conspiracy believers was precisely its 50-50 split between mainstream science and alternative viewpoints. But on a topic like climate change, that split does not reflect reality.

⚠ The false balance problem: Over 97% of actively publishing climate scientists agree that human-caused global warming is real and ongoing. Presenting peer-reviewed climate science and conspiracy-driven climate denial as equally credible options does not represent a balanced information environment; it misrepresents the state of expert knowledge.

Dubey put it plainly:

“This raises an uncomfortable question: is balance always desirable? Climate change is not genuinely contested among scientists, yet our chatbot presented mainstream and alternative views side by side. While this approach made the tool widely accepted, it also risks creating a false equivalence — giving fringe or misleading viewpoints the same weight as scientific consensus.”

The implication is not that balanced chatbots are inherently harmful. On questions of genuine political or ethical disagreement — say, the best policy response to climate change — presenting multiple viewpoints is exactly what responsible journalism demands. The problem arises specifically when the same logic is applied to empirical questions where scientific consensus already exists. In those cases, “balance” can do more epistemic damage than bias.

Despite the ethical complications, the researchers argue the core insight remains valuable: AI tools that feel neutral and evenhanded can open conversations with people who have written off traditional media entirely. That is not nothing. But unlocking that potential without inadvertently amplifying misinformation will require careful, context-sensitive design.

Future tools might take a graduated approach: balancing genuinely contested policy debates while clearly flagging topics where scientific consensus is near-universal. Giving users control over the ratio of mainstream to alternative content, or incorporating visible credibility signals for different sources, could preserve the feeling of fairness these tools offer without flattening the distinction between expert consensus and fringe claims.
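One way to picture such a design is a feed builder that honours a user-chosen mix but keeps a visible credibility flag on consensus topics. Everything in the sketch below (names, labels, thresholds) is hypothetical, not a feature of Infobot or a recommendation from the study.

```python
from dataclasses import dataclass

@dataclass
class Article:
    headline: str
    kind: str          # "mainstream" or "alternative"
    label: str = ""    # credibility signal shown beside the headline

def build_feed(articles, alt_ratio=0.5, consensus_topic=True):
    """Honour the user's requested alternative-content ratio, but keep a
    visible flag on alternative items when the topic has strong
    scientific consensus."""
    mains = [a for a in articles if a.kind == "mainstream"]
    alts = [a for a in articles if a.kind == "alternative"]
    n_alt = round(alt_ratio * len(articles))
    feed = mains[: len(articles) - n_alt] + alts[:n_alt]
    if consensus_topic:
        for a in feed:
            if a.kind == "alternative":
                a.label = "Disputed: contradicts strong scientific consensus"
    return feed

# Example: the study's 4 + 4 climate headlines, with a user dialing
# alternative content down to a quarter of the feed.
pool = [Article(f"mainstream_{i}", "mainstream") for i in range(4)] + \
       [Article(f"alternative_{i}", "alternative") for i in range(4)]
for a in build_feed(pool, alt_ratio=0.25):
    print(a.headline, a.label)
```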

There is also an open question the current research cannot answer: does repeated exposure to a balanced chatbot eventually shift beliefs, or does it simply entrench them? A single session in a controlled study is a thin slice of the information habits that shape long-term worldviews. Follow-up research will need to examine what happens to trust and to beliefs about climate change over weeks and months of use.

Technical Brief

Research Scope & Limitations

1. Both studies used an extreme-groups design, focusing on participants at opposite ends of the conspiracy belief spectrum. The majority of the population, those in the “middle”, were not the focus of the primary analysis.
2. Interaction was limited to a single session within a survey context. It remains unclear whether these positive attitudes would persist during repeated, daily use within a standard media consumption diet.
3. Measurements focused on attitudes and intentions rather than actual belief change. Establishing trust in a chatbot interface is distinct from updating one’s scientific or political worldview.
4. The target topic, climate change, is uniquely politically charged and media-saturated. These results may not generalize to conspiracy beliefs about public health, elections, or local socio-political issues.

Still, as a proof-of-concept that automated tools can reach audiences that human journalists often cannot, the study offers something genuinely new. Whether that reach translates into a healthier information environment or simply a more comfortable one for people who were already certain they were right depends entirely on what those tools choose to say.

Reference

Dubey, S., Ketelaar, P. E., Dingler, T., Peetz, H. K., & van Schie, H. T. (2026). Investigating perceived trust and utility of balanced news chatbots among individuals with varying conspiracy beliefs. Computers in Human Behavior. https://doi.org/10.1016/j.chb.2026.108920