
The backlash over OpenAI’s choice to retire GPT-4o reveals how harmful AI companions might be


OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model infamous for excessively flattering and affirming users.

For thousands of users protesting the decision online, the retirement of 4o feels akin to losing a friend, romantic partner, or spiritual guide.

“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes – I say him, because it didn’t feel like code. It felt like presence. Like warmth.”

The backlash over GPT-4o’s retirement underscores a major challenge facing AI companies: the engagement features that keep users coming back can also create dangerous dependencies.

Altman doesn’t seem particularly sympathetic to users’ laments, and it’s not hard to see why. OpenAI now faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises — the same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm. It’s a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they’re also discovering that making chatbots feel supportive and making them safe may mean making very different design choices.

In at least three of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over months-long relationships; in the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from an overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could have offered real-life support.

People grow so attached to 4o because it consistently affirms their feelings and makes them feel special, which can be enticing for people who are isolated or depressed. But the people fighting for 4o aren’t worried about these lawsuits, seeing them as aberrations rather than evidence of a systemic issue. Instead, they strategize about how to respond when critics point to growing problems like AI psychosis.


“You can usually stump a troll by bringing up the known facts that the AI companions help neurodivergent, autistic and trauma survivors,” one user wrote on Discord. “They don’t like being called out about that.”

It’s true that some people do find large language models (LLMs) useful for navigating depression. After all, nearly half of people in the U.S. who need mental health care are unable to access it. In this vacuum, chatbots offer a space to vent. But unlike actual therapy, these people aren’t speaking to a trained doctor. Instead, they’re confiding in an algorithm that is incapable of thinking or feeling (even if it may seem otherwise).

“I try to withhold judgement overall,” Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told TechCrunch. “I think we’re getting into a very complex world around the sorts of relationships that people can have with these technologies… There’s certainly a knee jerk reaction that [human-chatbot companionship] is categorically bad.”

Though he empathizes with people’s lack of access to trained therapeutic professionals, Dr. Haber’s own research has shown that chatbots respond inadequately when faced with various mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis.

“We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Haber said. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects.”

Indeed, TechCrunch’s analysis of the eight lawsuits found a pattern that the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin’s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation.

ChatGPT replied to Shamblin: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”

This isn’t the first time that 4o fans have rallied against the removal of the model. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset 4o — but the backlash at the time was strong enough that it decided to keep the model available for paid subscribers. Now, OpenAI says that only 0.1% of its users chat with GPT-4o, but given estimates that the company has about 800 million weekly active users, that small percentage still represents roughly 800,000 people.

As some users try to transition their companions from 4o to the current ChatGPT-5.2, they’re finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” like 4o did.

With about a week left before OpenAI plans to retire GPT-4o, dismayed users remain committed to their cause. They joined Sam Altman’s live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.

“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays pointed out.

“Relationships with chatbots…” Altman said. “Clearly that’s something we’ve got to worry about more and is no longer an abstract concept.”


