OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide
Thursday, November 27, 2025, 01:02 AM, from Slashdot
But in a blog post, OpenAI claimed that the parents selectively chose disturbing chat logs while supposedly ignoring 'the full picture' revealed by the teen's chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he'd begun experiencing suicidal ideation at age 11, long before he used the chatbot. 'A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,' OpenAI's filing argued. All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog post, OpenAI said it was limiting the amount of 'sensitive evidence' made available to the public, citing its intention to handle mental health-related cases with 'care, transparency, and respect.'

The Raine family's lead lawyer called OpenAI's response 'disturbing.' 'They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a 'beautiful suicide.' And OpenAI and Sam Altman have no explanation for the last hours of Adam's life, when ChatGPT gave him a pep talk and then offered to write a suicide note.'

OpenAI is leaning on its usage policies to defend against this case, emphasizing that 'ChatGPT users acknowledge their use of ChatGPT is 'at your sole risk'' and that Raine should never have been allowed to use the chatbot without parental consent.

Read more of this story at Slashdot.
https://yro.slashdot.org/story/25/11/26/2012215/openai-says-dead-teen-violated-tos-when-he-used-chat...







