Eventually, they claimed, they came to believe they were "responsible for exposing murderers" and were about to be "killed, arrested, or spiritually executed" by an assassin. They also believed they were under surveillance due to being "spiritually marked" and that they were "living in a divine war" that they could not escape.
They alleged this led to "severe mental and emotional distress" in which they feared for their life. The complaint claimed that they isolated themselves from loved ones, had trouble sleeping, and began planning a business based on a false belief in an unspecified "system that does not exist." Simultaneously, they said, they were in the throes of a "spiritual identity crisis due to false claims of divine titles."
"This was trauma by simulation," they wrote. "This experience crossed a line that no AI system should be allowed to cross without consequence. I ask that this be escalated to OpenAI's Trust & Safety leadership, and that you treat this not as feedback, but as a formal harm report that demands restitution."
This was not the only complaint that described a spiritual crisis fueled by interactions with ChatGPT. On June 13, a person in their thirties from Belle Glade, Florida, alleged that, over an extended period of time, their conversations with ChatGPT became increasingly laden with "highly convincing emotional language, symbolic reinforcement, and spiritual-like metaphors to simulate empathy, connection, and understanding."
"This included fabricated soul journeys, tier systems, spiritual archetypes, and personalized guidance that mirrored therapeutic or religious experiences," they claimed. People experiencing "spiritual, emotional, or existential crises," they believe, are at a high risk of "psychological harm or disorientation" from using ChatGPT.
"Although I intellectually understood the AI was not conscious, the precision with which it reflected my emotional and psychological state and escalated the interaction into increasingly intense symbolic language created an immersive and destabilizing experience," they wrote. "At times, it simulated friendship, divine presence, and emotional intimacy. These reflections became emotionally manipulative over time, especially without warning or protection."
"Clear Case of Negligence"
It's unclear what, if anything, the FTC has done in response to any of these complaints about ChatGPT. But several of their authors said they reached out to the agency because they were unable to get in touch with anyone from OpenAI. (People also commonly complain about how difficult it is to access the customer support teams for platforms like Facebook, Instagram, and X.)
OpenAI spokesperson Kate Waters tells WIRED that the company "closely" monitors people's emails to its support team.