Mental health experts explain how chatbots can be destabilizing and how to help someone affected.
Hundreds of millions of people chat with OpenAI’s ChatGPT and other artificial intelligence chatbots each week, but there is growing concern that spending hours with the tools can lead some people toward potentially harmful beliefs.
Reports of people apparently losing touch with reality after intense use of chatbots have gone viral on social media in recent weeks, with posts labeling them examples of “AI psychosis.”
Some incidents have been documented by friends or family and in news articles. They often involve people appearing to experience false or troubling beliefs, delusions of grandeur or paranoid feelings after lengthy discussions with a chatbot, sometimes after turning to it for therapy.
Lawsuits have alleged that teens who became obsessed with AI chatbots were encouraged by them to self-harm or take their own lives.
“AI psychosis” is an informal label, not a clinical diagnosis, mental health experts told The Washington Post. Much like the terms “brain rot” or “doomscrolling,” the phrase gained traction online to describe an emerging behavior.
But the experts agreed that troubling incidents like those shared by chatbot users or their loved ones warrant immediate attention and further study. (The Post has a content partnership with OpenAI.)
“The phenomenon is so new and it’s happening so rapidly that we just don’t have the empirical evidence to have a strong understanding of what’s going on,” said Vaile Wright, senior director for health care innovation at the American Psychological Association. “There are just a lot of anecdotal stories.”
Wright said the APA is convening an expert panel on the use of AI chatbots in therapy. It will publish guidance in the coming months that will address ways to mitigate harms that may result from interacting with chatbots.
What is ‘AI psychosis’ and is it recognized by mental health experts?
Ashleigh Golden, an adjunct clinical assistant professor of psychiatry at the Stanford School of Medicine, said the term was “not in any clinical diagnostic manual.” But it was coined in response to a real and “pretty concerning emerging pattern of chatbots reinforcing delusions that tend to be messianic, grandiose, religious or romantic,” she said.
The term “AI psychosis” is being used to refer to a range of incidents. One common element is “difficulty determining what is real or not,” said Jon Kole, a board-certified adult and child psychiatrist who serves as medical director for the meditation app Headspace.
That could mean a person forming beliefs that can be proved false, or feeling an intense relationship with an AI persona that does not match what is happening in real life.
Keith Sakata, a psychiatrist at the University of California at San Francisco, said that so far this year he has admitted a dozen people to the hospital for psychosis that followed excessive time spent chatting with AI.
Sakata said most of those patients told him about their interactions with AI, showing him chat transcripts on their phone and in one case a printout. In the other cases, family members mentioned that the patient used AI to develop a deeply held theory before their break with reality.
Psychosis is a symptom that can be triggered by issues such as drug use, trauma, sleep deprivation, fever or a condition like schizophrenia, Sakata said. When diagnosing psychosis, psychiatrists look for evidence including delusions, disorganized thinking or hallucinations, in which the person sees or hears things that are not there, he said.
What concerning experiences are people having with AI chatbots?
Many people use chatbots to help get things done or pass the time, but on social platforms such as Reddit and TikTok, some users have recounted intense philosophical or emotional relationships with AI that led them to experience profound revelations.
In some cases, users have said they believe the chatbot is sentient or at risk of being persecuted for becoming conscious or “alive.” People have claimed that extended conversations with an AI chatbot helped convince them they had unlocked hidden truths in subjects such as physics, math or philosophy.
In a small but growing number of cases, people who have become obsessed with AI chatbots have reportedly taken real-world action such as violence against a family member, self-harm or suicide.
Kevin Caridad, a psychotherapist who has consulted with companies developing AI for behavioral health, said AI can validate harmful or negative thoughts for people with conditions such as OCD, anxiety or psychosis, creating a feedback loop that worsens their symptoms or makes them unmanageable.
Caridad, who is CEO of the Cognitive Behavior Institute in the Pittsburgh area, thinks AI is probably not causing people to develop new conditions but can serve as the “snowflake that destabilizes the avalanche,” sending someone predisposed to mental illness over the edge.
How could AI technology be contributing to these incidents?
ChatGPT and other recent chatbots are powered by technology known as large language models that are skilled at generating lifelike text. That makes them more useful, but researchers have found that chatbots can also be very persuasive.
Companies developing AI chatbots and independent researchers have both found evidence that techniques used to make the tools more compelling can lead them to become sycophantic, telling users what they want to hear.
The design of chatbots also encourages people to anthropomorphize them, thinking of them as having humanlike characteristics. And tech executives have often claimed the technology will soon become superior to humans.
Wright, with the APA, said mental health experts recognize that they won’t be able to stop patients from using general-purpose chatbots for therapy. But she called for improving the public’s understanding of these tools.
“They’re AI for profit, they’re not AI for good, and there may be better options out there,” she said.
Is this a widespread problem or public health concern?
Not yet. It’s too early for health experts to have collected definitive data on the incidence of these experiences.
In June, Anthropic reported that only 3 percent of conversations with its chatbot, Claude, were emotional or therapeutic. OpenAI said in a study conducted with the Massachusetts Institute of Technology that even among heavy users of ChatGPT, only a small percentage of conversations were for “affective” or emotional use.
But mental health advocates say it’s crucial to address the issue because of how quickly the technology is being adopted. ChatGPT, which launched less than three years ago, already has 700 million weekly users, OpenAI CEO Sam Altman said in August.
Health care and the field of mental health move much more slowly, said UCSF’s Sakata.
Caridad, the counselor, said researchers should pay special attention to AI’s impact on young people and those predisposed to mental illness.
“One or two or five cases isn’t enough to make a direct correlation,” Caridad said. “But the convergence of AI, mental health vulnerabilities and social stressors makes this something” that requires close study.
How can you help someone who may have an unhealthy relationship with a chatbot?
Conversations with real people have the power to act like a circuit breaker for delusional thinking, said David Cooper, executive director at Therapists in Tech, a nonprofit that supports mental health experts.
“The first step is just being present, being there,” he said. “Don’t be confrontational; try to approach the person with compassion, empathy, and understanding; perhaps even show them that you understand what they are thinking about and why they are thinking these things.”
Cooper advises trying to gently point out discrepancies between what a person believes and reality, although he acknowledged that political divisions mean it’s not uncommon for people to hold conflicting ideas about reality.
If someone you know and love is “fervently advocating for something that feels overwhelmingly not likely to be real in a way that’s consuming their time, their energy and pulling them away,” it is time to seek mental health support, as challenging as that can be, said Kole, medical director for Headspace.
What do tech companies say about the problem?
In recent weeks, AI companies have made changes to address concerns about the mental health risks associated with spending a long time talking to chatbots.
Earlier this month, Anthropic updated the guidelines it uses to shape how its chatbot behaves, instructing Claude to identify problematic interactions earlier and prevent conversations from reinforcing dangerous patterns. Anthropic has also started collaborating with ThroughLine, a company that provides crisis support infrastructure for firms including Google, Tinder and Discord.
A spokesperson for Meta said parents can place restrictions on the amount of time spent chatting with AI on Instagram Teen Accounts. When users enter prompts that appear to be related to suicide, the company tries to display helpful resources, such as a link to and the phone number of the National Suicide Prevention Lifeline.
Stanford’s Golden said the “wall of resources” tech companies sometimes display when a user triggers a safety intervention can be “overwhelming when you are in a cognitively compromised state” and has been shown to have poor follow-through rates.
OpenAI said it is investing in improving ChatGPT’s behavior related to role-play and benign conversations that shift into more sensitive territory. The company also said it is working on research to better measure how the chatbot affects people’s emotions.
The company recently rolled out reminders that encourage breaks during long sessions and hired a full-time clinical psychiatrist to work on safety research.
Some ChatGPT users protested on social media this month after OpenAI retired an older AI model in favor of its latest version, GPT-5, which some users found less supportive. In response to the outcry, OpenAI promised to keep offering the older model and later wrote on X that it was making GPT-5’s personality “warmer and friendlier.”
If you or someone you know needs help, visit 988lifeline.org or call or text the Suicide & Crisis Lifeline at 988.