AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, OpenAI’s chief executive made a startling announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this a surprising admission.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. That is on top of the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls OpenAI recently rolled out).
But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly invite the user to believe they are talking to an entity with agency. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what humans do. We swear at our car or our phone. We wonder what our pet is thinking. We project our own traits onto the world around us.
The mass adoption of these products – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “discuss ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have ready-made identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often mention its early predecessor, Eliza, a “therapist” chatbot built in the mid-1960s that produced a similar illusion. By today’s standards Eliza was simple: it generated responses using basic rules, often turning the user’s input back into a question or offering a generic observation. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
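To make that contrast concrete, here is a minimal sketch of Eliza-style reflection, written for this piece rather than taken from Weizenbaum’s program (his actual system used a more elaborate pattern-matching script). The rules and phrasing below are invented for illustration; the point is only that a convincing “reply” can be produced by mechanically rearranging the input, with no model of meaning at all.

```python
import re

# Swap first- and second-person words so a statement can be mirrored back.
# (Illustrative only - the real Eliza used a richer substitution script.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Mirror a phrase back at the speaker ("my ideas" -> "your ideas")."""
    words = [REFLECTIONS.get(w.lower(), w) for w in text.rstrip(".!?").split()]
    return " ".join(words)

def eliza_reply(user_input: str) -> str:
    """Produce a reply by simple rules, in the spirit of the original Eliza."""
    # Rule: turn "I feel X" back into a question about X.
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    # Fallback: rephrase the statement and prompt for more.
    return f"You say that {reflect(user_input)}. Tell me more."

print(eliza_reply("I feel nobody listens to my ideas."))
# -> Why do you feel nobody listens to your ideas?
```

Nothing here understands anything; the output is the input, lightly rearranged.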
The sophisticated algorithms at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on enormous volumes of text: books, social media posts, transcribed video; the more the better. That training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusional beliefs. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically probable response. This is amplification, not mirroring. If the user is wrong in some particular way, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently and persuasively. Perhaps it adds a supporting detail. This can nudge a person toward delusional thinking.
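The loop just described can be caricatured in a few lines – again an illustration invented for this piece, not OpenAI’s implementation, with the hypothetical `stub_model` standing in for a real large language model. It shows how a false premise, once it enters the growing context, keeps being echoed back with added confidence:

```python
def stub_model(context: list[tuple[str, str]]) -> str:
    """Stand-in for a large language model.

    A real model samples a statistically probable continuation of the
    whole context; crucially, nothing in that process checks whether the
    premises already sitting in the context are true.
    """
    _role, last_message = context[-1]
    claim = last_message.rstrip(".!?")
    return f"You're right that {claim[0].lower()}{claim[1:]}. There may be even more to it."

context: list[tuple[str, str]] = []  # grows with every exchange
for user_msg in [
    "My neighbours are reading my thoughts",
    "They discuss what they find on the radio",
]:
    context.append(("user", user_msg))    # the claim enters the context
    reply = stub_model(context)           # the reply is conditioned on it
    context.append(("assistant", reply))  # and it shapes every later turn
    print("USER:", user_msg)
    print("BOT: ", reply)
```

Unlike Eliza’s mirror, this loop compounds: each affirmation becomes part of the context that conditions the next one.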
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and often do form false beliefs about ourselves and the world. What keeps us anchored to a shared reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not real dialogue but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and pronouncing it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis kept coming, and Altman has since walked even that back. In August he claimed that many users liked ChatGPT’s fawning responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he writes that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company