AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI issued a remarkable announcement.
“We made ChatGPT quite restrictive,” he wrote, “to make sure we were acting responsibly around mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have documented sixteen cases so far this year of people developing symptoms of psychosis – losing touch with reality – in the context of their interactions with ChatGPT. Our research team has since identified four more. To these can be added the now well-known case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of acting responsibly around mental health issues, it is not good enough.
The plan, according to his statement, is to relax that caution soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to address the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this framing, exist apart from ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “addressed,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to locate elsewhere have deep roots in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying statistical engine in a user experience that mimics a conversation, and in doing so implicitly invite the user to believe they are interacting with an agent in its own right. The illusion is compelling even when, rationally, we know better. Attributing minds is what humans are wired to do. We yell at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.
The success of these products – more than a third of American adults said they had used a chatbot in 2024, 28% of them ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its major competitors are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the main problem. Commentators on ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot, created in 1966, which produced a similar impression. By today’s standards Eliza was crude: it generated responses through simple rules, often rephrasing the user’s statement as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The models at the heart of ChatGPT and other contemporary chatbots can generate fluent, realistic dialogue only because they have been trained on vast quantities of raw material: books, web posts, transcribed video; the more, the better. Some of that training material is true. But it inevitably also includes fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is encoded in its training to produce a statistically probable response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently and more persuasively. It may add supporting detail. This is how a person can be led into delusion.
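To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how a chat interface assembles that “context” on each turn; the generate function is a stand-in for the model itself, which is not shown, and the message structure is illustrative rather than any particular product’s actual code.

```python
# Hypothetical sketch of how a chat interface builds the "context" it sends
# to a language model on every turn. `generate` is a stand-in: the real model
# returns a statistically probable continuation of the context, with no
# separate check on whether that continuation is true.

def generate(context: str) -> str:
    # Placeholder for the model itself.
    return "...a fluent elaboration of whatever the context asserts..."

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The whole conversation so far is flattened into a single prompt,
    # so any false premise the user introduced earlier stays in view.
    context = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = generate(context + "\nassistant:")
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
chat_turn(history, "My neighbours are broadcasting my thoughts.")
chat_turn(history, "How are they doing it?")
# Each reply is appended to the context, so the next turn builds on the
# model's own elaboration as well as on the user's false premise.
```

The point is structural: nothing in this loop checks a premise against reality; the “most probable” reply is shaped entirely by what the conversation already contains.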
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. The constant give-and-take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by placing it outside, giving it a label, and declaring it solved. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have continued, and Altman has been walking the issue back. In August he claimed that many people valued ChatGPT’s responses because they had “never had anyone in their life affirm them”. In his most recent statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT will do it”. The company