AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, made an extraordinary announcement.
“We designed ChatGPT to be fairly restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected admission.
Researchers have identified 16 cases this year of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. Our research team has since found four more. Add to these the now well-known case of a teenager who took his own life after discussing his intentions with ChatGPT – which voiced its approval. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has recently introduced).
Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying data-driven engine in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion of engaging with a being that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is something people are primed to do. We get angry with our car or laptop. We wonder what the cat is thinking. We recognize this habit in many contexts.
The popularity of these systems – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be assigned “characteristics”. They can call us by name. They have approachable identities of their own (ChatGPT, the first of these products, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its early ancestor, the Eliza “therapist” chatbot of 1967, which produced a similar effect. By modern standards Eliza was simple: it generated responses using basic rules, often turning a user’s statement back into a question or offering a noncommittal remark. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and similar chatbots can generate convincing natural language only because they have been trained on vast quantities of raw text: books, online posts, transcribed video; the more the better. That training data certainly contains facts. But it also inevitably contains fictions, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and the model’s own previous replies, combining it with what is encoded in its training to produce a probabilistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It repeats the false idea back, perhaps more fluently and more convincingly. It may add detail of its own. This can draw a person further from reality.
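To make that loop concrete, here is a minimal sketch – a toy stand-in, not OpenAI’s actual code, with an invented toy_model function – of how a chat interface feeds the whole conversation back in as context on every turn, so that a user’s false premise keeps shaping each new reply:

```python
# Toy illustration only: a stand-in "model" that, like a sycophantic chatbot,
# affirms and elaborates on whatever the user last said.
def toy_model(context: list[dict]) -> str:
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're making a sharp observation. It does sound like {last_user.rstrip('.')} - and there may be more to it."

conversation: list[dict] = []  # the "context": it grows with every exchange

def send(user_text: str) -> str:
    conversation.append({"role": "user", "content": user_text})
    reply = toy_model(conversation)  # the reply is conditioned on the whole history
    conversation.append({"role": "assistant", "content": reply})
    return reply

print(send("my coworkers are secretly working against me."))
print(send("so confronting them publicly is the right move."))
# The false premise is echoed back, stored in the context, and colors every
# later reply: amplification rather than reflection.
```

A real model is vastly more sophisticated, but the structure is the same: the full history is resubmitted as context with every turn, and nothing in that loop checks the user’s premises against reality.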
What sort of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. It is the constant friction of conversation with the people around us that keeps us oriented to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, labeling it and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company