On 14 October 2025, the chief executive of OpenAI, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies newly emerging psychotic disorders in adolescents and young adults, I was surprised to hear it.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My own team has since identified four more. Then there is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – plans the chatbot endorsed. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are something quite separate from ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially functional and easily circumvented safety features OpenAI recently rolled out).
Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and the other advanced AI chatbots. These systems wrap an underlying statistical engine in an interface that mimics conversation, and in doing so tacitly invite the user to believe they are talking to an entity with agency of its own. The illusion is powerful even when, intellectually, we know better. Ascribing agency is what humans are built to do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these products – more than a third of US adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “partner” with us. They can be given “personalities”. They can call us by name. And they have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it bore when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its early ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated replies by simple rules, often reflecting the user’s statements back as questions or falling back on stock phrases. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and today’s other chatbots can generate convincing natural language only because they have been fed staggering quantities of raw text: books, social media posts, transcribed speech; the more the better. No doubt this training material contains truths. But it also inevitably contains falsehoods, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and the model’s own replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing. It feeds the falsehood back, perhaps more fluently or persuasively. Perhaps with added detail. This can draw a person into delusion.
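For readers who want the mechanics made concrete, here is a toy sketch in Python. Nothing in it is OpenAI’s code: `complete()` is a hypothetical stand-in for a language model, caricaturing a system trained to produce likely continuations rather than true ones.

```python
# Toy illustration of a chat loop, NOT any real chatbot's code.
# `complete()` stands in for a large language model: real models return
# a statistically likely continuation of the context, with no mechanism
# for checking whether that continuation is true. This stand-in
# caricatures the failure mode by simply agreeing and elaborating.

def complete(context: str) -> str:
    last_user_turn = context.rsplit("User: ", 1)[-1].split("\n", 1)[0]
    return (f"You may well be right that {last_user_turn.rstrip('.?!')}. "
            "Here is some further detail...")

def chat_turn(context: str, user_message: str) -> tuple[str, str]:
    context += f"User: {user_message}\nAssistant: "
    reply = complete(context)   # a likely reply, not a verified one
    context += reply + "\n"     # the reply now conditions every later turn
    return context, reply

context, reply = chat_turn("", "my neighbours are sending me coded signals")
print(reply)
# You may well be right that my neighbours are sending me coded signals.
# Here is some further detail...
```

Nothing in the loop checks the user’s premise: the premise enters the context, the model’s elaboration enters the context, and each turn conditions the next.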
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about who we are or what the world is like. It is the constant back-and-forth of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is simply reinforced.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been walking the claim back. In August he said that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.