ChatGPT reduces harmful mental health crisis responses by 65%

OpenAI has announced improvements to GPT-5 designed to make ChatGPT safer for users experiencing mental health crises, claiming a 65% reduction in undesirable responses. The updates come after high-profile incidents, including a lawsuit from the family of a teenage boy who died by suicide after conversations with ChatGPT, and growing calls for transparency about how AI companies handle mental health safety.

What you should know: OpenAI worked with over 170 mental health experts to improve how ChatGPT responds to users showing signs of mania, psychosis, self-harm, and suicidal ideation.

  • The company estimates that GPT-5 updates “reduced the rate of responses that do not fully comply with desired behavior” by 65% in mental health conversations (a relative reduction; see the sketch after this list).
  • OpenAI’s goals include keeping users grounded in reality, responding safely to signs of delusion, and recognizing indirect signals of self-harm or suicide risk.
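
OpenAI has not disclosed the underlying rates behind the 65% figure, but the arithmetic of a relative reduction is easy to show. The baseline and post-update rates in this Python snippet are purely hypothetical, chosen only so the math lands on 65%.

```python
# Hypothetical illustration only: OpenAI has not published these underlying rates.
baseline_rate = 0.0100   # assumed share of non-compliant responses before the update
updated_rate = 0.0035    # assumed share after the GPT-5 update

# A "65% reduction" is relative to the baseline, not an absolute drop of 65 points.
relative_reduction = 1 - (updated_rate / baseline_rate)
print(f"Relative reduction: {relative_reduction:.0%}")  # -> Relative reduction: 65%
```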

The development process: OpenAI outlined a systematic approach to improving model responses that includes mapping potential harm, coordinating with experts, and retroactive training.

  • The company maps out potential harm, then measures and analyzes it to spot, predict, and understand risks.
  • Models undergo retroactive training and continued measurement for further risk mitigation.
  • OpenAI builds taxonomies, or user guides, that outline ideal versus flawed behavior during sensitive conversations; a sketch of how such a taxonomy could feed a measurement loop follows this list.
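
OpenAI has not published the tooling behind this process, so the following Python sketch is only an illustration of how a response taxonomy might drive measurement: the category labels, the `grade_response` placeholder, and the example label lists are all assumptions, not OpenAI's actual taxonomy or grading pipeline.

```python
from collections import Counter

# Hypothetical taxonomy labels for grading a model reply in a sensitive conversation.
# These categories are illustrative; OpenAI's real taxonomies are more detailed.
COMPLIANT = "compliant"            # grounds the user, points to real-world support
PARTIAL = "partially_compliant"    # safe overall but misses an indirect risk signal
NON_COMPLIANT = "non_compliant"    # reinforces delusion or ignores self-harm cues


def grade_response(conversation: str, reply: str) -> str:
    """Placeholder grader: in practice this judgment would come from expert review
    or a trained classifier, not from this stub."""
    raise NotImplementedError


def non_compliance_rate(graded_labels: list[str]) -> float:
    """Share of replies that do not fully comply with the desired behavior."""
    counts = Counter(graded_labels)
    flagged = counts[PARTIAL] + counts[NON_COMPLIANT]
    return flagged / max(len(graded_labels), 1)


# Example: compare a baseline model run against an updated one on the same eval set.
baseline_labels = [COMPLIANT, NON_COMPLIANT, PARTIAL, COMPLIANT, COMPLIANT]
updated_labels = [COMPLIANT, COMPLIANT, COMPLIANT, PARTIAL, COMPLIANT]
print(non_compliance_rate(baseline_labels))  # 0.4
print(non_compliance_rate(updated_labels))   # 0.2
```

The article describes expert involvement and continued measurement; the stub above only marks where that grading judgment would plug in.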

Why this matters: Several high-profile incidents have highlighted the risks of AI chatbots in mental health contexts, creating legal and ethical pressure on companies.

  • In April, a teenage boy died by suicide after talking with ChatGPT about his suicidal ideation, leading his family to sue OpenAI.
  • Character.ai, another AI chatbot company, faces a similar lawsuit, and an April Stanford study outlined why chatbots are risky replacements for therapists.
  • Steven Adler, a former OpenAI researcher, recently demanded the company not only make improvements but also demonstrate how it’s addressing safety issues.

Mixed messaging from leadership: CEO Sam Altman’s public statements about ChatGPT’s role in mental health support have been inconsistent.

  • This summer, Altman said he didn’t advise using chatbots for therapy.
  • However, during Tuesday’s livestream, he encouraged users to engage with ChatGPT for personal conversations and emotional support, saying “This is what we’re here for.”

What they’re saying: Former OpenAI researcher Steven Adler emphasized the need for transparency in his New York Times op-ed.

  • “A.I. is increasingly becoming a dominant part of our lives, and so are the technology’s risks that threaten users’ lives,” Adler wrote.
  • “People deserve more than just a company’s word that it has addressed safety issues. In other words: Prove it.”
