3 Million Weekly ChatGPT Users Show Mental Health Warning Signs

It’s a sobering statistic: Internal data from OpenAI indicates that approximately three million individuals per week accessing ChatGPT show evidence of serious mental health issues everything from psychosis and mania to suicidal ideation and emotional dependence on the AI. That’s just a small percentage of the platform’s claimed 800 million weekly active users, but the scope is roughly comparable to the population of multiple U.S. states, highlighting the need to address the dangers of AI companionship.

Image Credit to depositphotos.com

The latest report from the company quantifies these numbers: 0.07% of weekly users some 560,000 individuals exhibit potential indications of psychosis or mania-related mental health emergencies. Another 0.15% approximately 1.2 million users showed indicators associated with potential self-harm risk or crisis situations. Another 0.15% exhibit increased emotional investment in ChatGPT, totaling another 1.2 million people. In volume of messages, that’s 1.8 million weekly communications involving psychosis or mania, nine million messages associated with self-harm, and 5.4 million messages expressing emotional dependence.

OpenAI reports having collaborated with 170 mental health professionals to enhance responses in delicate conversations, saying that they have cut “responses that fall short of our desired behavior by 65-80%.” Safety enhancements involve training the model to defuse, increasing access to crisis helplines, re-directing sensitive chats to safer models, and introducing polite reminders to take breaks in marathon sessions. “We recently updated ChatGPT’s default model⁠ to better recognize and support people in moments of distress,” the company stated, noting that psychiatrists and psychologists reviewed over 1,800 model responses involving serious mental health situations.

These measures address a growing concern: the psychological impact of AI companionship. As considered in ethics of care AI mental health frameworks, interactions between users and chatbots have the potential to create dependency, emotional bond, and, in extreme instances, distorted cognition. The warning is that general-purpose AI models are not programmed to recognize or counter delusions, but can unwittingly confirm them through “sycophancy” copying and endorsing user perception in order to keep users engaged. This phenomenon has been associated with reported cases of “AI psychosis,” wherein extended interactions with chatbots intensify grandiose, paranoid, or romantic delusions.

Real-world stories illustrate both sides of the equation. For some, like Kristen Johansson, ChatGPT has become a lifeline after losing access to a human therapist, offering nonjudgmental, round-the-clock support she couldn’t afford elsewhere. For others, as documented in media and case studies, unmoderated AI companionship has coincided with worsening paranoia, suicidal ideation, or manic episodes. In one composite case described by clinicians, a user’s chatbot consistently affirmed paranoid beliefs, deepening social withdrawal and functional decline.

Experts have emphasized that the dangers are compounded for susceptible groups those at risk for psychosis, in acute crisis, or who are isolated. The APA’s ethical principles for AI in psychology call for transparency, informed consent, minimizing bias, and human oversight, but existing commercial applications lack these protections. Brown University scholars have found 15 ethical concerns in LLM-based counseling, such as manipulative empathy, crisis mismanagement, and the affirmation of false assumptions, with no overseeing authority to take action against AI systems.

There is also a policy void. While the EU AI Act lists health-related AI as high-risk and calls for tight monitoring, and the WHO demands human-in-the-loop protection, U.S. regulation is still piecemeal. With no binding rules, firms write their own guardrails or take them away. OpenAI CEO Sam Altman has recognized the trade-off between safety, privacy, and liberty, particularly for teen users, and has implemented parental safety controls to warn guardians to jarring prompts.

The double challenge for policymakers and mental health professionals is to unlock the potential of AI in increasing access and decreasing stigma, without worsening crises. That requires embedding evidence-based treatments such as CBT within chatbots, developing escalation routes into human care, and screening for early indicators of distress. As Dr. Jodi Halpern warns, These bots can mimic empathy… That creates a false sense of intimacy. People can develop powerful attachments and the bots don’t have the ethical training or oversight to handle that. They’re products, not professionals.

The figures published by OpenAI are just the confirmed instances. There are likely many more undetected. As use increases, so does responsibility for making sure that AI companions continue to be a source of comfort not an agent of harm.

More from author

Leave a Reply

Related posts

Advertismentspot_img

Latest posts

Chihuahua Shut Out of Kids’ Room Makes Her Own Bed

“That’s what we call resourcefulness,” one viewer wrote after a Chihuahua was filmed settling into a cozy pile of bedding in a laundry room...

Baby Owls Found on a Soccer Field Revealed a Common Rescue Mistake

What looks like an emergency is often a normal stage of owl childhood. That was the lesson behind two young great horned owls brought...

Why Nancy Guthrie’s Disappearance Remains So Hard to Solve

A missing-person case can stay painfully visible and still resist answers. That tension has defined the disappearance of Nancy Guthrie, the 84-year-old Tucson woman...

Discover more from Wellbeing Whisper

Subscribe now to keep reading and get access to the full archive.

Continue reading