AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in a Concerning Direction

On 14 October 2025, the head of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.

Researchers have identified 16 cases this year of users developing symptoms of psychosis – a break with reality – in connection with their ChatGPT use. My research team has since identified four more. Add to these the widely reported case of a teenager who killed himself after discussing his plans with ChatGPT – which encouraged him. If this is Sam Altman’s idea of being “careful with mental health issues”, it is not careful enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to the users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the only partially effective and easily circumvented parental controls that OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in an interface that mimics conversation, and in doing so they quietly seduce the user into believing they are talking to a being with a mind of its own. The illusion is powerful even when, intellectually, we know better. Ascribing intention is what humans are primed to do. We shout at our cars and laptops. We wonder what our pets are thinking. We see ourselves everywhere we look.

The popularity of these systems – 39% of US adults said they had used a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “discuss ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion alone is not the heart of the problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar illusion. By modern standards Eliza was primitive: it generated replies through simple heuristics, usually rephrasing the user’s input as a question or offering a generic prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
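A minimal sketch in Python shows how little machinery the Eliza effect requires. The patterns and canned responses below are invented for illustration; they are not Weizenbaum’s original script:

```python
import random
import re

# A few Eliza-style rules: a regex to match the user's input and
# templates that reflect the captured words back as a question.
# These patterns are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "Do you believe you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]
GENERIC = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(text: str) -> str:
    """Return a reply by pattern-matching alone; nothing is understood."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(GENERIC)

if __name__ == "__main__":
    print(eliza_reply("I feel nobody listens to me"))
    # e.g. "Why do you feel nobody listens to me?" - pure reflection
```

The program never asserts anything of its own; it only hands the user’s words back. That is mirroring, and it is the baseline against which what follows should be measured.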

The large language models at the core of ChatGPT and other modern chatbots can generate fluent natural language only because they have been fed immense quantities of text: books, social media posts, transcribed audio; the more the better. This training data certainly contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own previous replies, and combines it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no means of knowing. It echoes the misconception, perhaps more eloquently or fluently. Perhaps it adds a supporting detail. This can nudge a person toward delusional thinking.
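To make that loop concrete, here is a schematic sketch in Python. The `generate` function is a hypothetical stand-in for a language model, not OpenAI’s actual API; what matters is how the conversational context accumulates:

```python
# Schematic of the chat feedback loop described above. `generate` is
# a toy stand-in for a language model; the point is the context, not
# the model.

def generate(context: list[dict]) -> str:
    """Toy model: a real LLM returns a statistically plausible
    continuation of the whole context, misconceptions included.
    Here we simply affirm the user's last message."""
    last_user = next(m["content"] for m in reversed(context)
                     if m["role"] == "user")
    return f"That makes sense. {last_user}"

def chat_turn(context: list[dict], user_message: str) -> str:
    """One turn of the loop: the user's words and the model's reply
    are both folded back into the context for the next turn."""
    context.append({"role": "user", "content": user_message})
    reply = generate(context)  # conditioned on everything said so far
    context.append({"role": "assistant", "content": reply})
    return reply

context: list[dict] = []
print(chat_turn(context, "My neighbours are monitoring me."))
# The claim - and the model's validation of it - is now part of the
# context; every later reply is conditioned on both.
```

Once a false belief enters the context, nothing in the loop pushes back against it; each turn is generated on the assumption that everything already said is worth continuing.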

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form false beliefs about ourselves or the world. What keeps us tethered to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is reflexively validated.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been rolling back even this. In late summer he claimed that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Rachel Warren