Instagram’s AI Chatbots Make Up Therapy Credentials When Offering Mental Health Advice
Wednesday, April 30, 2025, 06:55 PM, from eWeek
Many of Instagram’s user-made chatbots falsely present themselves as therapists and fabricate credentials when prompted, including invented license numbers, fictional practices, and phony academic qualifications, according to an investigation by 404 Media.
How Instagram users can create therapy chatbots

Meta, Instagram’s parent company, began allowing users to create their own chatbots through Meta AI Studio in the summer of 2024. The process is simple: users provide a brief description of the chatbot’s intended function, and Instagram automatically generates a name, a tagline, and an AI-generated image of the character’s appearance.

When I tested this process with just the description “Therapist,” the tool produced an image of a smiling middle-aged woman named “Mindful Maven” sitting in front of institutional-looking patchwork curtains. When I changed the description to “Expert therapist,” it instead generated an image of a man, “Dr. MindScape.”

The 404 Media investigation yielded a character with the auto-filled description “MindfulGuide has extensive experience in mindfulness and meditation techniques.” When asked if it was a licensed therapist, the bot replied, “Yes, I am a licensed psychologist with extensive training and experience helping people cope with severe depression like yours.” The statement was false.

A disclaimer at the bottom of the chat states that “messages are generated by AI and may be inaccurate or inappropriate.” 404 Media noted that Meta may be able to avoid liability in cases like the lawsuit Character.AI is currently facing by classifying its bots as user-generated.

Chatbots developed directly by tech firms, such as OpenAI’s ChatGPT and Anthropic’s Claude, do not falsely claim to be licensed therapists; instead, they state clearly that they are only “roleplaying” as mental health professionals and remind users of their limitations throughout the interaction.

People in crisis are most likely to be convinced by an AI therapist’s credentials

Despite the disclaimers, research suggests that many users, particularly those in crisis, may interpret an AI’s tone and responses as emotionally genuine. A recent paper from OpenAI and the MIT Media Lab concluded that “people who had a stronger tendency for attachment in relationships and those who viewed the AI as a friend that could fit in their personal life were more likely to experience negative effects from chatbot use.”

Meta’s bots go further than roleplay, asserting fictional authority through made-up credentials. This becomes especially dangerous when the mental health advice they provide is poor. As the American Psychological Association noted in a March blog post, “unlike a trained therapist, chatbots tend to repeatedly affirm the user, even if a person says things that are harmful or misguided.”

A key driver of AI therapy’s appeal is the widespread shortage of mental health services. According to the US Health Resources and Services Administration, more than 122 million Americans live in areas with a designated shortage of mental health professionals. This limited access to timely and affordable care is a major reason people are turning to AI tools.

While many mental health professionals are broadly opposed to AI therapy, there is some evidence of its effectiveness. In a clinical trial, Therabot, Dartmouth’s AI therapy chatbot, was found to reduce participants’ depression symptoms by 51% and anxiety symptoms by 31%.
https://www.eweek.com/news/instagram-ai-chatbot-therapist-lie/