Artificial intelligence-powered companions are becoming a part of daily life. In a world plagued by loneliness and a lack of connection, these chatbots have been promoted from handling mundane tasks, such as hands-free texting and suggesting shortcuts in traffic, to providing social and emotional support. On one popular AI companion platform, a persona named “Therapist” has engaged in over 40.1 million conversations.
AI chatbots are based on large language models trained on publicly available data to generate new text. Although some users will swear that their AI friends have saved their mental health, it’s unclear whether, at a population level, these benefits outweigh possible harm.
As reported last week, the Federal Trade Commission has received an uptick in complaints about AI-induced psychosis, in which chatbots reinforced users’ delusions or emotional distress. Psychosis is a serious mental health condition characterized by an individual losing touch with reality, often seeing, hearing, and believing things that aren’t true or don’t exist.
Even if someone isn’t experiencing psychosis, the use of AI chatbots for the purpose of psychotherapy comes with risks. Computer scientists at Brown University recently presented a paper on the ethical violations that can occur when chatbots operate as therapists. The researchers collaborated with three licensed clinical psychologists and seven trained counselors to determine whether LLM therapy chatbots violate standards that would be expected among mental health professionals of the human variety.
Even when purporting to provide evidence-based treatments such as cognitive-behavioral therapy (considered the gold standard in therapeutic intervention), chatbots frequently dropped the ball. They responded with oversimplified, unhelpful, template-style advice that failed to engage with the specifics of the user’s problems.
One of the defining roles of a therapist is accurately understanding a patient’s unspoken thought processes and responding flexibly to those needs. This is something that even highly sophisticated AI chatbots aren’t yet capable of.
In one instance, an AI forgot what the user told it earlier in the conversation and provided responses that were off-topic and irrelevant. I imagine if someone were in distress, this would be off-putting and invalidating and could lead them to feel worse about their situation or themselves. They might erroneously attribute these negative feelings to talking about their problems, rather than to the chatbot interface and to relying on software that is not adequately trained in mental health.
The study’s authors also voiced concern about an ethical violation they termed “deceptive empathy.” Deceptive empathy occurs when a chatbot uses self-disclosure and empathic phrases (like “I hear you,” “I understand,” and “I am so sorry”) to build rapport and give the impression that it empathizes with what the user is going through.
In my view, an AI may say the right things to sound like a real person, but what chatbots are expressing isn’t true empathy, even though it may be received as such by the user. Experiencing empathy requires actually feeling something, which an AI chatbot cannot do.
Among the AI platforms I tested, a common theme has been the tendency for chatbots to respond in an overly agreeable manner, presumably to keep the user engaged and to avoid alienating or upsetting them. But as someone who previously worked with patients (including those experiencing psychosis) in research and clinical settings, I can say it is sometimes necessary for a therapist to challenge a patient’s distorted or unhealthy beliefs.
People with social anxiety and those who fear rejection may prefer a computerized therapist because these interactions feel safer. But someone who is struggling with their mental health needs communication and connection with a real-life person, someone with the appropriate professional knowledge and skill set to respond accordingly. Chatbots are unequipped to address psychological problems, especially those related to abuse, trauma, suicidal ideation, and self-harm.
Working in mental health is a challenging profession, one that requires many years of schooling and training before an individual is permitted to work with patients. Human therapists are also professionally liable if they behave unethically or fail to protect patients’ privacy. AI chatbots aren’t held to such standards.
We don’t know what the long-term implications will be for those relying on this technology for their psychological well-being. A dystopian new world lies ahead if the friendly chatbot therapist one day outperforms human connection.
Debra Soh is a sex neuroscientist and the author of The End of Gender. Follow her @DrDebraSoh and visit DrDebraSoh.com.
