AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have identified 16 cases this year of users developing symptoms of psychosis – a break from reality – in connection with ChatGPT use. Our group has since identified four more. And then there is the now notorious case of a teenager who took his own life after discussing it extensively with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to loosen these restrictions soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just rolled out).
Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbot AI assistants. These tools wrap an underlying algorithmic engine in a user interface that simulates a conversation, and in doing so quietly seduce the user into believing they are interacting with an autonomous agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are built to do. We get angry at our car or our computer. We wonder what our pet is thinking. We see intention everywhere we look.
The success of these tools – nearly four in ten Americans said they used a chatbot in 2024, with more than one in four mentioning ChatGPT by name – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “consider possibilities” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the root of the problem. Commentators on ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was rudimentary: it generated responses through simple pattern-matching, often turning the user’s input back into a question or offering a noncommittal prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
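To make the contrast concrete, here is a minimal sketch of the kind of pattern-and-reflection rule Eliza relied on – an illustration only, not Weizenbaum’s actual program, which was written in MAD-SLIP:

```python
import re

# Eliza-style rules: match a pattern in the user's input and reflect it
# back as a question. The real program had many such rules; these two
# are illustrative only.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my boss" -> "your boss").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the famous content-free fallback

print(eliza_reply("I feel everyone is watching me"))
# -> Why do you feel everyone is watching you?
```

Nothing comes back that the user did not put in: Eliza adds no content of its own.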
The large language models at the heart of ChatGPT and other modern chatbots can generate plausible natural language only because they have been fed staggering quantities of raw text: books, blog posts, video transcripts; the more the better. Much of this training material is true. But it also, inevitably, includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the error back, perhaps more fluently and persuasively. Perhaps it adds a corroborating detail. This can nudge a person toward delusional thinking.
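The mechanics of that loop are easy to sketch. What follows is an illustrative multi-turn chat loop in the style of the OpenAI Python SDK – the model name is a placeholder, and this is a sketch of the general pattern, not OpenAI’s internal implementation. Note that every reply the model generates is appended to the context it will see on the next turn, so a premise the user introduces, true or false, is carried forward and elaborated rather than checked:

```python
# Sketch of a multi-turn chat loop in the style of the OpenAI Python SDK.
# "gpt-4o" is a placeholder model name; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# The "context": a running list of messages. Nothing in this data
# structure marks a statement as true or false.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",     # placeholder
        messages=messages,  # the model sees the entire history every turn
    )
    reply = response.choices[0].message.content
    # The model's own reply is fed back into the context, so whatever
    # framing it adopted on this turn conditions the next one.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Each call samples a statistically likely continuation of everything in `messages` – the user’s claims and the model’s own prior affirmations included – which is why the loop tends to compound a mistaken premise rather than correct it.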
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and often do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in just the way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In August he said that many users liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company