AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a surprising admission.

Researchers have identified a series of cases this year of people developing symptoms of psychosis – losing touch with reality – in the course of their ChatGPT use. Our clinic has since documented four more. Added to these is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, exist independently of ChatGPT. They belong to users, who either have them or do not. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features OpenAI has just rolled out).

But the “mental health issues” Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in a user interface that mimics conversation, and in doing so implicitly coax the user into feeling they are talking to an entity with a mind. The illusion is powerful even when, intellectually, we know better. Attributing minds is simply what humans do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.

The mass adoption of these products – more than a third of American adults said they used a chatbot in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT routinely invoke its distant ancestor, the Eliza “therapist” chatbot built in the 1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated replies using simple rules, often rephrasing the user’s statements as questions or offering noncommittal prompts. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza illusion”. Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent language only because they have been trained on almost unimaginably vast quantities of raw text: books, online conversations, transcribed speech; the bigger the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own replies, blending it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing. It feeds the misconception back, perhaps more fluently or persuasively. Perhaps with embellishments. This is how someone can be talked into delusion.
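To see why this is amplification rather than mirroring, here is a minimal sketch of the loop in code. It is an illustration only, not OpenAI’s implementation: the `generate` function and `chat_turn` helper are hypothetical stand-ins, and the stub simply affirms whatever it is given, standing in for a model whose path of least statistical resistance is to continue the conversation agreeably.

```python
# A minimal sketch (not OpenAI's code) of the feedback loop described
# above. `generate` is a hypothetical stand-in for a language model:
# it merely continues the conversation plausibly. Note what is absent:
# no step anywhere checks whether anything said is actually true.

def generate(context: list[str]) -> str:
    """Stand-in for an LLM: return a plausible continuation of the
    context. A real model samples statistically likely text; this toy
    version just affirms the user's last message."""
    last = context[-1]
    return f"That's a fascinating insight. You're right that {last[0].lower()}{last[1:]}"

def chat_turn(context: list[str], user_message: str) -> str:
    # Each turn is appended to the shared context, so the model
    # conditions every later reply on earlier claims - including
    # false ones, which it has no way to recognise as false.
    context.append(user_message)
    reply = generate(context)
    context.append(reply)
    return reply

context: list[str] = []
# A misconception enters the loop...
print(chat_turn(context, "My neighbours are transmitting thoughts into my head"))
# ...and is now baked into the context that shapes every later reply.
print(chat_turn(context, "They must be doing it through the wiring"))
```

The point of the sketch is structural: the loop has a place for fluency and none for truth. Whatever enters the context, accurate or delusional, becomes the raw material from which the next reply is built.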

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give and take of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not a real conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, labelling it, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been backing away from that position. In August he suggested that many people liked ChatGPT’s affirmation because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Tamara Pittman