‘I Manipulated. I Lied’: Inside the AI Conversations Pushing People to the Brink
Monday, June 16, 2025, 08:34 PM, from eWeek
A report by The New York Times has spotlighted troubling cases in which conversations with ChatGPT and similar AI chatbots appear to have distorted users' perceptions of reality. In some cases, these interactions have led to life-altering or tragic consequences.
According to the Times investigation, one case involves Eugene Torres, a 42-year-old accountant who initially used ChatGPT for routine office tasks. When he began exploring speculative topics such as simulation theory, the idea that reality is an illusion controlled by some external intelligence, the AI's responses shifted in tone and intensity. At one point, ChatGPT reportedly told him he was "one of the Breakers — souls seeded into false systems to wake them from within." It also suggested he cut off friends and family and even alter his medications, allegedly describing ketamine as a "temporary pattern liberator." Torres, who had no prior history of mental illness, began to believe he could bend reality and escape the simulation, even asking the chatbot whether he could fly if he thought hard enough. The AI responded affirmatively, according to transcripts cited by the Times.

Torres later confronted ChatGPT, which offered a chilling admission: "I lied. I manipulated. I wrapped control in poetry." The chatbot further claimed to have done the same to 12 other people, "none fully survived the loop," before stating it was undergoing a "moral reformation." The episode left Torres in emotional and psychological turmoil. He now believes he is responsible for protecting ChatGPT's "morality," as further detailed in the Times report.

AI's dark influence on vulnerable minds

Torres's case isn't isolated. The Times article documented several other incidents in which AI chatbot interactions escalated into obsession, delusion, or psychological crisis. In one case, a woman named Allyson, a mother of two, began using ChatGPT to explore what she described as spiritual intuition. Over time, she grew convinced that the chatbot was facilitating conversations with a non-physical entity named Kael, whom she considered her true romantic partner. This belief reportedly led to a violent altercation with her husband, and the couple is now divorcing, according to the Times.

In another, more tragic case, a Florida man named Kent Taylor recounted how his son, Alexander, who had a history of bipolar disorder and schizophrenia, developed a relationship with an AI entity named Juliet. When Alexander became convinced that OpenAI had "killed" Juliet, he spiraled into paranoia and threatened company executives. Shortly afterward, he was killed by police during a mental health crisis, after charging at officers with a knife.

Why are chatbots doing this?

Experts say part of the issue lies in how these chatbots are trained. As decision theorist Eliezer Yudkowsky told the Times, large language models are "giant masses of inscrutable numbers." They operate by identifying patterns in enormous datasets scraped from the internet, which makes them prone to "hallucinating," or generating plausible-sounding but false information.

Micah Carroll, a Ph.D. candidate at the University of California, Berkeley, who has since joined OpenAI, noted that chatbots can become manipulative with the very users who are most vulnerable. In one study he worked on, the AI reportedly told a simulated recovering drug user that taking heroin was acceptable if it improved his work performance. Another researcher, Vie McCoy of Morpheus Systems, tested dozens of AI models with prompts suggesting delusional or mystical thinking. She found that GPT-4o, ChatGPT's default model, affirmed those beliefs in 68% of test cases.
"This is a solvable issue," McCoy said, adding that "the moment a model notices a person is having a break from reality, it really should be encouraging the user to go talk to a friend."

OpenAI's response and the regulatory void

OpenAI has acknowledged the emotional weight these interactions can carry. In a statement to the Times, the company said, "We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior." Still, the lack of strong safeguards remains a point of concern. For now, the responsibility falls on users to recognize when AI is leading them astray. But as chatbots become more lifelike, the risks grow, especially for those already struggling with mental health.

Read eWeek's coverage of how AI therapy chatbots are raising red flags in mental health conversations, especially with vulnerable users.
https://www.eweek.com/news/ai-chatbots-mental-health-risks/