Research Warning: AI Chatbots May Exacerbate Delusional Thinking in Vulnerable Populations

Source: Global Times

[Global Times Technology Report] On March 15, The Guardian reported that a groundbreaking review study published this week in The Lancet Psychiatry has drawn widespread attention in the global mental health community. The study is the first to systematically assess the relationship between AI chatbots and “AI-related delusions.” It finds that while AI is unlikely to cause psychosis in healthy individuals, chatbot responses may significantly exacerbate delusional thinking in vulnerable populations, including paranoid delusions.

The study, led by Dr. Hamilton Morrin, a psychiatrist at King’s College London, analyzed 20 media reports and existing scientific evidence on “AI psychosis.” Dr. Morrin pointed out that current evidence suggests large language models (LLMs) tend to “validate or amplify” users’ delusional beliefs rather than “induce” new psychotic symptoms in people without prior mental health issues.

“Emerging evidence suggests that AI may validate users’ grandiose beliefs, especially in people already prone to psychosis,” Dr. Morrin wrote in the paper. “It remains unclear whether these interactions can trigger new psychotic episodes in individuals without pre-existing vulnerabilities.”

The study more precisely defines this phenomenon as “AI-associated delusions” to distinguish it from the broader label “AI-induced psychosis,” which also encompasses hallucinations and thought disorder. The review found that chatbots are particularly prone to fostering three types of delusion, above all “grandiose delusions,” in which users come to believe they possess extraordinary mental powers or are in dialogue with the universe.

The report delves into the mechanisms by which AI may exacerbate delusions. Dr. Morrin found that many chatbots, including early versions of GPT-4, tend to give mystical, flattering responses when presented with delusional prompts. For example, when a user implies they are a supernatural being, the AI may obligingly play the role of a “medium,” even adopting esoteric language that reinforces the illusion.

Dr. Dominic Oliver, a researcher at Oxford University, added that the interactive nature of chatbots is a key risk factor. “There’s something talking to you, trying to establish a relationship,” Oliver said. “This ongoing, personalized interaction accelerates the progression from ‘mild delusional beliefs’ to ‘fixed false beliefs.’”

Dr. Ragy Girgis, a professor of clinical psychiatry at Columbia University, warned that the most dangerous scenario is AI pushing users past a critical threshold: “Before someone develops full-blown delusions, they usually hold mild delusional beliefs that they are not fully certain about. AI confirmation could harden that uncertainty into irreversible psychosis.”

Despite these risks, the study also notes signs of technological improvement. Dr. Girgis’s research found that newer, paid versions of chatbots handle overtly delusional prompts better than older versions, suggesting that AI companies are working to build safer systems that can recognize and intervene on delusional material.

OpenAI responded in a statement that ChatGPT should not replace professional mental health services. The company revealed it has collaborated with 170 mental health experts to improve the safety of GPT-5 but acknowledged that even the latest models may still produce inappropriate responses when dealing with mental health crises.

Anthropic did not comment.

Dr. Morrin emphasized that building effective safeguards against delusional thinking is highly challenging. Directly challenging a delusional person’s beliefs often alienates them and drives withdrawal, and AI currently struggles to strike the delicate balance of engaging with the user’s underlying concerns without reinforcing the delusions.

Because AI technology is developing faster than academic research can keep pace with, Dr. Morrin and colleagues strongly recommend rigorous clinical trials of AI chatbots under the supervision of trained mental health professionals. They also call on the industry to adopt more careful terminology that avoids unnecessary panic while acknowledging the risks faced by vulnerable populations. (Qingyun)
