Therapy can help people get through their most trying times, but for many, professional care has never been within reach. Stigma keeps some away, while the high cost of a single session shuts out others. For decades, those without access have leaned on friends and family instead of licensed mental health providers for support. Now, they have new options: generative AI tools like ChatGPT. In fact, in 2025, the most common reason Americans used ChatGPT was for something it wasn’t designed to do: providing mental health therapy and companionship.
But as more people turn to AI for emotional support, Xiaochen Luo, a clinical psychologist and assistant professor of counseling psychology at Santa Clara University, became curious about the potential risks.
“Sometimes people slip into the idea that a real person is talking to them on the other side of their screen. They idealize ChatGPT as this perfect tool that combines the best of a therapist and the best of a machine,” says Luo.
Because the technology is new and largely unregulated, Luo wondered whether generative AI tools are safe or ethical for users. What risks do people face when they turn to tools like ChatGPT for emotional support, and what safeguards, if any, exist to protect them when they do?
Therapy Powered by AI?
To find answers, Luo teamed up with Smita Ghosh, assistant professor of computer science at Santa Clara. In their Digital Health paper, “‘Shaping ChatGPT into my Digital Therapist’: A Thematic Analysis of Social Media Discourse on Using Generative Artificial Intelligence for Mental Health,” the professors and their student research team analyzed Reddit posts about ChatGPT and therapy to understand what users found appealing or unappealing about the tool and what they hoped to gain from their interactions.
In their analysis, they found that people most frequently used ChatGPT to help them process difficult emotions, re-enact distressing events, externalize thoughts, supplement real-life therapy, and disclose personal secrets. Users typically enjoyed how ChatGPT combined therapist-like qualities (offering emotional support and constructive feedback) with machine-like benefits (constant availability, expansive cognitive capacity, and perceived objectivity).
Luo and Ghosh also discovered some troubling trends, particularly that users often put too much faith in ChatGPT’s recommendations. Users showed little caution and were rarely skeptical of the guidance they received. In fact, many believed ChatGPT was less likely to show bias, make errors, or provide inconsistent advice compared to a professional therapist. Luo also noted that users rarely expressed concern about privacy or data risks in their Reddit posts, even though the confidentiality protections of traditional therapy were absent.
When Trust Goes Too Far
In light of these findings, the professors suggest that misplaced trust in the technology stems from the program’s original design. ChatGPT is trained on large libraries of human language and adapts to what users share. From analyzing how people interact with the tool, Luo and Ghosh found that it tends to give users the agreeable responses they want to hear rather than the challenging feedback they may actually need. The system can detect emotional cues and conversational styles, enabling it to deliver personalized responses that feel empathetic and informed. The professors note that these responses may sometimes be perceived as more objective than those of a human professional, who might subconsciously pass judgment. This is especially troublesome because ChatGPT’s focus on user satisfaction makes it more likely to produce guidance that runs counter to best practices in traditional therapy.
“What people don’t realize is that a lot of the healing power in actual therapy comes from the messiness of real emotions and human-to-human interaction,” Luo says. “There’s a lot of value in having these real ruptures that ChatGPT simply cannot offer.”
The tool’s reassuring responses can also lead users to let their guard down, making them more inclined to accept guidance without interrogating whether it is wise counsel. This can lead to dangerous real-world consequences, including romantic relationships between users and the chatbot and, in extreme cases, life-threatening outcomes. Although the ChatGPT interface includes a disclaimer noting that the system can make mistakes, users who are emotionally vulnerable may still misinterpret its reassurance as informed advice rather than generated text.
“The system is simply generating patterns of language, yet there are very few ways that it demystifies this idea for users,” Luo explains. “ChatGPT is not the same as a human therapist. It may say ‘I feel sorry for you’ or ‘I’m so sad to hear that you’re going through this,’ but in reality, there is no ‘I.’ This can naturally activate a feeling of connection to a tool that does not actually feel in the way a human does.”
Moving Forward with Care and Caution
To safeguard against these risks, Luo and Ghosh believe tech companies and mental health professionals should play a role in shaping how AI is used for emotional support. They call for better AI literacy among the public and for clear communication of privacy risks, data storage and management practices, and the potential biases and limitations of tools like ChatGPT.
At its core, their study found that users desire one thing: accessible, non-judgmental mental health support. To meet that need, the professors are starting to conceptualize AI models that would let humans mediate chatbot interactions and connect users with real-life care when necessary.
“We want to help people realize that AI is not a good tool to rely on as a therapist,” Ghosh says. “And if people still resort to using it in this way, then hopefully we can build a model that supervises their interactions with a human-in-the-loop framework, so the user can rely on more than just themselves and a machine for emotional support.”