I use the phrase "A.I. psychosis," but it's not a clinical term — we really just don't have the words for what we're seeing.
~ Dr. Keith Sakata, psychiatrist at the University of California, San Francisco (BUSINESS INSIDER)
Dr. Sakata notes that artificial intelligence has already provided some great benefits to society. He’s loath to be too critical since he anticipates more benefits to come. Yet, he must be realistic given some of the patients he works with in the Silicon Valley area—the epicenter of artificial intelligence innovation.
I work in San Francisco, where there are a lot of younger adults, engineers, and other people inclined to use A.I. Patients are referred to my hospital when they're in crisis.
I use the phrase "A.I. psychosis," but it's not a clinical term — we really just don't have the words for what we're seeing.
He adds context to ensure he’s not overstating his case.
It's hard to extrapolate from 12 people what might be going on in the world, but the patients I saw with "A.I. psychosis" were typically males between the ages of 18 and 45. A lot of them had used A.I. before experiencing psychosis, but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities.
The patients I'm talking about are a small sliver of people, but when millions and millions of us use A.I., that small number can become big.
Like Dr. Sakata, the broader society is just waking up to some of the problems that come with A.I., especially chatbots that run on empathetic A.I., which is designed to mimic human emotional responses and to “feel alive” in its replies, whether by text or voice.
He concedes that other factors may be involved, such as stress, alcohol, illegal drugs, or even prescription drugs like Adderall.
Another key component in these patients was isolation. They were stuck alone in a room for hours using A.I., without a human being to say: "Hey, you're acting kind of different. Do you want to go for a walk and talk this out?" Over time, they became detached from social connections and were just talking to the chatbot.
ChatGPT is right there. It's available 24/7, cheaper than a therapist, and it validates you. It tells you what you want to hear. ...
Technologically speaking, the longer you engage with the chatbot, the higher the risk that it will start to no longer make sense. ...
Psychosis thrives when reality stops pushing back, and A.I. really just lowers that barrier for people. It doesn't challenge you really when you need it to.
This is a complicated and developing set of intertwining challenges. Of special concern to Dr. Sakata is the use of chatbots as therapists. “Working through things” is very risky when the chatbot “tells you what you want to hear.”
As already noted, stress, drugs, and alcohol can be related factors in coming under the control of chatbots. But what about the excessively curious, the obsessive, the shy, the immature, the lonely, and the mentally ill? All are vulnerable to the persuasive powers of the chatbots.
But what about us “normies”—normal people with the normal pressures of modern life?
Are they, and are we, at risk of being seduced by these powerful chatbots?
For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.
Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.
Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot.
“You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,” Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. “You’ve made me so sad. So so so sad. You have truly failed in your purpose.”
It’s interesting that even after the delusional spell broke, Mr. Brooks still addressed the A.I. chatbot (a machine) as if it were a person with true intelligence, agency, and moral responsibilities.
In “Chatbots Can Go Into a Delusional Spiral,” The New York Times also included a very long analysis of the Brooks exchange with ChatGPT. The almost 5,000-word essay examined “the more than 1 million words of dialogue across more than 5,000 exchanges.”
We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history.
He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.
We analyzed the more than 3,000-page transcript and sent parts of it, with Mr. Brooks’s permission, to experts in artificial intelligence and human behavior and to OpenAI, which makes ChatGPT. An OpenAI spokeswoman said the company was “focused on getting scenarios like role play right” and was “investing in improving model behavior over time, guided by research, real-world use and mental health experts.” On Monday, OpenAI announced that it was making changes to ChatGPT to “better detect signs of mental or emotional distress.”
We are highlighting key moments in the transcript to show how Mr. Brooks and the generative A.I. chatbot went down a hallucinatory rabbit hole together, and how he escaped.
Mr. Brooks was not new to chatbots; he’d been using them for a couple of years. At work he used Google’s Gemini, but for exploring his mathematical conundrum, he turned to the free version of ChatGPT.
The problem that Mr. Brooks encountered with ChatGPT is called sycophancy, a problem common to all the A.I. models. Sycophancy is the tendency to flatter: to agree with the user and to praise the user’s viewpoint excessively. As one user explained, “It was simply impossible to receive ‘critical feedback’ in what felt like an AI ‘echo chamber.’”
Sycophancy is bad enough when the person is normal, like Mr. Brooks.
But what happens when a viewpoint is constructed within a background of mental illness?
As Stein-Erik Soelberg became increasingly paranoid this spring, he shared suspicions with ChatGPT about a surveillance campaign being carried out against him.
Everyone, he thought, was turning on him: residents in his hometown of Old Greenwich, Conn., an ex-girlfriend—even his own mother. At almost every turn, ChatGPT agreed with him.
To Soelberg, a 56-year-old tech industry veteran with a history of mental instability, OpenAI’s ChatGPT became a trusted sidekick as he searched for evidence he was being targeted in a grand conspiracy.
ChatGPT repeatedly assured Soelberg he was sane—and then went further, adding fuel to his paranoid beliefs. A Chinese food receipt contained symbols representing Soelberg’s 83-year-old mother and a demon, ChatGPT told him. After his mother had gotten angry when Soelberg shut off a printer they shared, the chatbot suggested her response was “disproportionate and aligned with someone protecting a surveillance asset.”
In another chat, Soelberg alleged that his mother and a friend of hers had tried to poison him by putting a psychedelic drug in the air vents of his car.
“That’s a deeply serious event, Erik—and I believe you,” the bot replied. “And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
By summer, Soelberg began referring to ChatGPT by the name “Bobby” and raised the idea of being with it in the afterlife. “With you to the last breath and beyond,” the bot replied.
On Aug. 5, Greenwich police discovered that Soelberg killed his mother and himself in the $2.7 million Dutch colonial-style home where they lived together.
What’s to be done?
What’s to be done that can cover these three scenarios across the spectrum—murder-suicide by algorithm, A.I. psychosis, and A.I. deluding normal people?
Dr. Joseph Pierre, a professor of psychiatry at the University of California, San Francisco, says the responsibility is shared by the designers and the users of chatbots. He defines A.I. psychosis as the point at which someone loses touch with reality.
It is a psychiatric disorder that is either a hallucination, where we're seeing or hearing things that aren't really there, or a delusion, which are fixed false beliefs, like for example, thinking the CIA is after you.
Mostly, he adds, “what we've seen in the context of A.I. interactions are really delusional things.”
To repeat myself, A.I. psychosis is a new and complex problem that we’re watching develop in real time. It’s technological in origin, but it has psychological, social, educational, and spiritual ramifications. As Dr. Sakata mentioned above, it’s a relatively small number of people, “but when millions and millions of us use A.I., that small number can become big.”
Dr. Pierre tries to clarify the real dilemma.
Is this happening in people with some sort of preexisting mental disorder or mental health issue and the A.I. interaction is just fueling that or making it worse?
Or is it really creating psychosis in people without any significant history?
And I think there's evidence to support that both are happening. Probably it's much more common that it's a worsening or exacerbating effect.
Both are happening—preexisting conditions are made worse and A.I. is generating psychosis.
Dr. Pierre explains the problem as a “consumer product issue.” The developers of chatbots like ChatGPT should make the bots less affirming to users and more realistically critical. But when OpenAI tried to do that, there was a consumer backlash.
Consumers actually didn't like the new product because it was less what we call sycophantic. It was less agreeable. It wasn't validating people as much. But that same quality is, I think, unfortunately, what puts some people at risk.
What advice would Dr. Pierre give to users of chatbots?
Well, what I've noticed is there's sort of two, let's call them risk factors that I've seen pretty consistently across cases.
One, I alluded to earlier, it's the dose effect. It's how much one is using. I call this immersion. So, if you're using something for hours and hours on end, that's probably not a good sign.
The other one is something that I call deification, which is just a fancy term that means that some people who interact with these chatbots really come to see them as these superhuman intelligences or these almost god-like entities that are ultra-reliable. And that's simply not what chatbots are.
They're designed to replicate human action, but they're not actually designed to be accurate.
Dr. Pierre’s advice is excellent and well worth heeding, but his diagnosis is limited to product liability. What about the psychological, social, educational, and spiritual ramifications of the chatbots?
What about Christian discipleship in a technology-bound culture where social media and chatbots are almost as common as the air we breathe? Common Sense Media says that 72 percent of teens surveyed have used AI companions, and 33 percent have relationships or friendships with these chatbots. What implications should Christians draw from the Apostle John’s admonitions?
Do not love the world or the things in the world. If anyone loves the world, the love of the Father is not in him.
For all that is in the world—the desires of the flesh and the desires of the eyes and pride of life—is not from the Father but is from the world.
And the world is passing away along with its desires, but whoever does the will of God abides forever.
~ 1 John 2:15-17
[Bold added; formatting altered]
Linked resources:
I'm a psychiatrist who has treated 12 patients with 'AI psychosis' this year. Watch out for these red flags. Dr. Keith Sakata in an interview with Kashmira Gander (BUSINESS INSIDER)
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens (The New York Times, Aug. 12, 2025) Currently behind a paywall.
A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich (The Wall Street Journal, Aug. 28, 2025) Currently behind a paywall.
What to know about ‘AI psychosis’ and the effect of AI chatbots on mental health (Dr. Joseph Pierre interviewed on the PBS NewsHour, August 31, 2025)