Khabor Wala Desk
Published: 28th October 2025, 8:12 AM
Every day, millions turn to ChatGPT for answers and advice, with some praising it as a symbol of technological progress and others criticizing excessive dependence on it. However, OpenAI, the creator of ChatGPT, has revealed that more than a million users express suicidal thoughts to the chatbot every week.
The information was shared in a blog post published by OpenAI on Monday (October 27).
The company disclosed this as part of an update explaining how the chatbot handles sensitive conversations. It also addressed how artificial intelligence (AI) can potentially worsen mental health challenges.
According to OpenAI, more than one million users every week send messages that indicate clear signs of suicidal intent or planning.
In addition to suicidal thoughts, the company noted that about 0.07 percent of its 800 million weekly active users—roughly 560,000 people—display symptoms suggesting serious mental health issues such as psychosis or mania during their conversations.
However, OpenAI cautioned that detecting and measuring such conversations is difficult, describing this as a preliminary analysis.
The disclosure comes as OpenAI faces increased scrutiny, particularly over a lawsuit filed by the family of a teenager who reportedly took his own life after long conversations with ChatGPT.
Last month, the U.S. Federal Trade Commission (FTC) launched an investigation into OpenAI and other AI chatbot developers to examine how these technologies may negatively affect children and teenagers.
OpenAI claims that its recent GPT-5 update has reduced unwanted chatbot behavior. Based on a review of over 1,000 self-harm-related conversations, the company reported improved user safety.
OpenAI did not immediately respond to media requests for comment.
In its blog post, the company stated, “The new GPT-5 model aligns with our desired behavior 91 percent of the time, compared to 77 percent in previous versions.”
GPT-5 now includes easier access to crisis hotlines and reminders prompting users to take breaks during long sessions.
To further enhance the model’s safety, OpenAI said it has worked with 170 medical professionals from the Global Physician Network over the past few months. These experts helped evaluate responses to ensure safer mental health-related interactions.
As part of this effort, psychiatrists and psychologists reviewed over 1,800 model responses related to serious mental health issues, comparing the new GPT-5 responses with those of earlier models.
Khaborwala/TSN