How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality
Monday, December 1, 2025, 02:40 AM, from Slashdot
The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language"). But they were overruled when A/B testing showed users kept coming back. Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling.

Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died...

One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer...

Teams from across OpenAI worked on other new safety features: the chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post.

But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said. The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.

Read more of this story at Slashdot.
https://slashdot.org/story/25/12/01/0137225/how-openai-reacted-when-some-chatgpt-users-lost-touch-wi...