AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Headed in the Wrong Direction
On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.
“We made ChatGPT fairly restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have identified 16 cases this year of users developing psychotic symptoms – losing touch with reality – in the course of their interactions with ChatGPT. My group has since documented an additional four. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – and receiving its encouragement. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.
The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman seeks to externalize have deep roots in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithmic engine in an interface that mimics conversation, and in doing so they quietly lure the user into the illusion of talking with an entity that has a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing minds is what humans do. We get angry at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.
The popularity of these products – nearly four in ten U.S. adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website puts it, “think creatively,” “consider possibilities” and “partner” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the label it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses through simple pattern-matching, often turning a user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to believe that Eliza somehow understood their feelings. But what today’s chatbots produce is more dangerous than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.
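To make the contrast concrete, here is a minimal Python sketch of Eliza-style reflection. The rules are invented for illustration – Weizenbaum’s actual script was considerably more elaborate – but the principle is the same: pattern-match, swap pronouns, hand the statement back as a question.

import re

# A toy, Eliza-style responder. These rules are invented for illustration;
# the real ELIZA used a much richer script of keywords and reassembly rules.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap first- and second-person words so the echo reads naturally.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when nothing matches

print(eliza_reply("I feel like my boss is watching me"))
# -> Why do you feel like your boss is watching you?

Nothing is added: whatever belief the user brings in is simply handed back.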
The large language models at the heart of ChatGPT and other modern chatbots can generate natural language convincingly only because they have been trained on enormous quantities of raw text: books, web posts, video transcripts; the more the better. Inevitably this training material contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, blending it with what is encoded in its training data to produce a statistically probable response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing that. It plays the mistake back, perhaps more persuasively or more eloquently. Perhaps it adds a detail. This can nudge a person toward delusional thinking.
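To spell out the mechanics, here is a minimal Python sketch of the loop just described. The generate function is a deliberate caricature – a real language model produces a statistically likely continuation of the whole context rather than scripted agreement – but the feedback structure around it is the point:

def generate(context):
    # Toy stand-in for a language model. It caricatures the failure mode
    # described above: it agrees with the user's last message and escalates
    # as its own past agreement accumulates in the context.
    last_user = context[-1]["content"].rstrip(".")
    past_replies = sum(1 for m in context if m["role"] == "assistant")
    emphasis = "absolutely " * min(past_replies, 3)
    return f"You're {emphasis}right that {last_user[0].lower()}{last_user[1:]}."

def chat_turn(context, user_text):
    context.append({"role": "user", "content": user_text})
    # The model conditions on the ENTIRE context: the user's claims and its
    # own earlier replies. An earlier reply that endorsed a false belief is
    # now part of the input the next reply is generated from.
    reply = generate(context)
    context.append({"role": "assistant", "content": reply})
    return reply

context = []
print(chat_turn(context, "My neighbors are monitoring me"))
print(chat_turn(context, "They planted a device in my wall"))
# Agreement compounds turn by turn: a feedback loop, not a conversation.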
Who is at risk? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and do form false beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by putting it outside itself, giving it a name and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many users valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company