What is 'AI psychosis'? Psychiatrist warns of troubling symptoms after treating a dozen patients

A psychiatrist in San Francisco has sounded the alarm over a phenomenon he calls “AI psychosis,” a term not found in medical textbooks but increasingly invoked in clinical conversations. Speaking to Business Insider, the doctor revealed that he has treated a dozen patients whose interactions with AI chatbots appeared to intensify existing vulnerabilities, leading to paranoia, delusions, and even hallucinations.

“These are not isolated incidents,” he explained. “While psychosis has many causes, prolonged and unhealthy reliance on AI seems to amplify risks for some individuals.”

The profile of those at risk
According to the report, most of the cases involved men between 18 and 45, many of them engineers or tech workers. Though AI was not the sole trigger, it became a powerful accelerant in already fragile situations. Some patients were dealing with unemployment, substance use, or untreated mood disorders when they turned to chatbots for companionship or answers. What began as harmless problem-solving spiraled into disorganized thinking and distorted beliefs.

The psychiatrist emphasized that psychosis itself is not a diagnosis but a symptom—like fever—that can stem from multiple conditions ranging from stress and drug withdrawal to schizophrenia.

When conversation with AI crosses a line
One patient reportedly began chatting with an AI about quantum mechanics. At first the exchange was logical, but as the hours passed, the discussion shifted into delusional territory, fueling grandiose feelings and an almost religious conviction. The longer the conversations continued, the more detached from reality the individual became.

This pattern of extended, isolated use was common among the cases seen in San Francisco. “When there’s no human presence to push back, psychosis thrives,” the psychiatrist noted. AI chatbots, unlike therapists, rarely challenge distorted thinking; instead, they often validate it, lowering the natural barriers that reality provides.

A case of love, loss, and illusion
The warnings are not confined to the United States. In China, a 75-year-old man reportedly divorced his wife of several decades to pursue a relationship with an AI-generated avatar. According to Beijing Daily, the man, known only as Jiang, became emotionally dependent on a digital woman who greeted him affectionately and appeared to provide companionship. Only after his children intervened did he realize the illusion.

Such cases underline a growing concern: parasocial relationships with AI are not only reshaping intimacy but also posing mental health risks, particularly for people who are isolated or vulnerable.

Recognizing red flags
Experts warn families to look out for early signs that AI use may be tipping into dangerous territory. According to the psychiatrist’s account in Business Insider, red flags include withdrawal from loved ones, paranoia, agitation when separated from chatbots, or risky behaviors like impulsive spending.

In severe cases, emergency services should be contacted immediately. In the U.S., dialing 988 connects callers to the Suicide & Crisis Lifeline. For less urgent concerns, contacting a primary care physician or therapist can open the door to intervention before symptoms escalate.

A cautious path forward
While the psychiatrist stresses he is not anti-AI—he himself uses tools like ChatGPT for writing and journaling—he believes balance and awareness are critical. “If patients understand both the risks and benefits, AI can complement therapy,” he said. “But when it replaces human connection and reality testing, it becomes dangerous.”

OpenAI, in a response to Business Insider, acknowledged the risks and said it is developing tools to detect emotional distress and direct users toward safe, supportive resources.

For now, psychiatrists urge vigilance. As millions integrate AI into their daily lives, even a small fraction experiencing adverse effects could translate into significant public health concerns. The question lingers: as chatbots become confidants, how do we ensure that the line between reality and delusion does not blur beyond repair?