AI therapy, once a concept confined to science fiction, is now a tangible reality, presenting complex dilemmas. On one hand, AI offers the potential to democratize access to mental health support. On the other, it raises profound questions about safety, clinical validity, and the very nature of the therapeutic relationship. Some U.S. states, such as California and New Jersey, are introducing bills to regulate AI therapy, following reports of bots giving harmful advice to users.
Undoubtedly, patients are increasingly likely to use or ask their providers about AI-powered mental health tools. To help you navigate the emergence of AI therapy, here are the insights you’ll need to advise patients, evaluate tools and understand the evolving role of technology in mental healthcare.
What is AI therapy and how does it work?
AI therapy refers to mental health support delivered through artificial intelligence platforms. These tools come in various forms, including:
- Chatbots and conversational agents: Chatbots are one of the most prevalent forms of AI. Using natural language processing (NLP), chatbot therapists engage users in text-based conversations, offering a space to articulate feelings and work through problems.
- Virtual therapists: These are more advanced systems that may incorporate an avatar and voice interaction, simulating a more direct therapeutic session.
- AI-powered mental health apps: These applications (such as Woebot and Wysa) often integrate AI to track mood patterns and deliver personalized content based on established therapeutic models like Cognitive Behavioral Therapy (CBT).
Underpinning these tools are sophisticated technologies like machine learning (ML), which allows the AI to learn from vast datasets of text and conversation to improve its responses over time. NLP enables the AI to interpret and generate human-like text.
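For readers curious about the mechanics, the sketch below is a deliberately simplified, keyword-based responder in Python. Commercial tools rely on machine-learned language models rather than hand-written rules, but the basic loop — take a message, classify it, return a therapeutic-style reply — has the same shape. All names and replies here are illustrative, not drawn from any real product.

```python
# Toy illustration of one chatbot "turn": classify the user's message by
# keyword, then return a canned CBT-style reflective prompt. Real AI
# therapy tools replace the keyword rules with large language models.

RESPONSES = {
    "anxious": "It sounds like you're feeling anxious. What thought is driving that feeling?",
    "sad": "I'm sorry you're feeling low. Can you describe when this started?",
}
DEFAULT = "Tell me more about what's on your mind."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:  # naive keyword match stands in for NLP
            return response
    return DEFAULT

print(reply("I've been anxious about work lately"))
```

The gap between this toy and a production system — statistical language understanding, safety filtering, crisis escalation — is exactly where the clinical-validity questions discussed below arise.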
Sermo members are already noticing the effects of these tools in clinical practice. In a small sample poll, almost half (48%) of physicians reported that their patients frequently discuss using AI chatbots or digital therapy apps, with another 16% hearing about it occasionally.
Despite their growing popularity, some members remain skeptical of AI therapy. “It may be helpful to a small extent but in my experience with patients having therapy, they prefer a face-to-face with a doctor,” notes one GP. “…the patient would benefit better in the long term with a traditional consultation.”
Others voiced a more fundamental concern. “Lack of social contact is why many are depressed in the first place. Are we now saying we want to move more people away from other people with a robot?” voices a family medicine physician. “In the end, it is the human touch, love and kindness that does the healing,” echoes a GP.
What are the benefits of AI in mental health for patients?
Despite the valid concerns, the potential benefits of AI in mental health are compelling. Many AI therapy applications are built on a foundation of structured, evidence-based interventions, like CBT. These tools promise to extend the reach of mental healthcare in several key ways.
First, AI therapy can increase accessibility, as one review of research notes. Mental healthcare often has significant barriers, including high costs, long waiting lists and geographical isolation. AI therapy tools can provide immediate, on-demand support at a fraction of the cost of traditional therapy, if not for free. This makes support available to individuals who might otherwise have no options. For someone experiencing a moment of crisis in the middle of the night, an AI chatbot can offer an instant listening ear.
The review also notes that AI offers scalability, which is particularly noteworthy amid a global shortage of mental health professionals. Human therapists can only see a limited number of patients, but an AI tool can serve millions of users simultaneously. This ability to scale is crucial for addressing the widespread need for mental health support, especially in underserved populations.

Another study notes that AI provides a degree of anonymity that can be a powerful draw for those hesitant to seek help. The stigma surrounding mental illness remains a potent barrier, preventing many from speaking to a human therapist for fear of judgment. Interacting with an AI tool can feel safer and less intimidating.
The clinical validity and safety of AI therapy
While AI therapy has potential benefits, it also raises questions of clinical validity and safety. For mental health experts to recommend any therapeutic tool, it needs to be proven effective and safe. The research on AI therapy’s efficacy is still in its early stages, but growing. One review notes that some studies have shown promising results, but the evidence is far from conclusive across all conditions and platforms.
A lack of a standardized regulatory framework is one major challenge, the review notes. While some AI tools undergo rigorous testing and seek validation through clinical trials, others are released to the public with little to no independent oversight. It may be difficult for patients to distinguish between evidence-based applications and those that are essentially wellness apps with a veneer of clinical language.
Data privacy is another paramount concern. These apps collect incredibly sensitive personal health information. Without strong security protocols and transparent data-usage policies, patients are at risk of privacy breaches or misuse of their data. It falls on physicians to be vigilant in questioning how these companies protect user information and whether their practices comply with regulations like HIPAA.
The Sermo community is aware of these risks. In a small sample poll, an overwhelming majority of physicians called for stronger regulation. Sixty percent of respondents believe “strong oversight is urgently needed” for AI mental health tools, and another 27% agree that “some regulation is needed.”
Should patients and physicians trust AI therapy advice?
It’s questionable whether a machine, no matter how sophisticated, can truly earn a patient’s trust. While AI can process language, it cannot truly understand human emotion. It lacks the capacity for genuine empathy, nuanced clinical judgment and the shared human experience that allows therapists to build rapport and navigate complex psychological landscapes.
AI struggles with the subtleties of human communication—the unspoken cues, the cultural context, and the deep-seated emotional currents that a human therapist is trained to perceive. This limitation becomes particularly dangerous in crisis situations, as illustrated by tragic, high-profile cases in which AI has provided harmful or inappropriate responses to users expressing severe distress or suicidal ideation. For example, a lawsuit alleges that a 16-year-old boy died by suicide after ChatGPT encouraged the act, and another case involves a man who killed his mother and himself after his delusions were reportedly fueled by an AI chatbot. Both incidents underscore the dangers of unsupervised AI in sensitive mental health contexts.
A GP on Sermo highlights the ethical implications of an algorithm providing therapeutic advice without human oversight. “I do think there’s a risk that AI chatbots could unintentionally replace human connection for some vulnerable people,” they warn. “…Nothing can truly substitute the empathy and nuance of real human relationships.” An oncologist and Sermo member feels similarly. “AI can perform certain programmatic tasks very well, but it lacks subjective initiative, which is why it cannot replace human labor,” they write. Members see AI as a tool, not a complete replacement for therapists.
Is it okay for doctors to refer patients to AI therapy?
Given the benefits and risks, the question remains whether it’s ever appropriate for physicians to recommend an AI therapy tool. Sermo polling shows that physicians see a potential, but conditional, role for these tools.
When asked about recommending AI tools for patients with mild to moderate anxiety or stress, 68% of physicians found it either “very appropriate” (34%) or “somewhat appropriate with oversight” (34%). However, others are more cautious, with 19% believing it is only appropriate when no other options are available.
If AI tools could be clinically validated and proven safe, the enthusiasm grows. A majority (52%) of physicians said they would be “very likely” to incorporate such tools into their practice, and 28% would be “somewhat likely,” depending on the patient population and the quality of the tool.
Appropriate scenarios for referral might include using the tools as a bridge when patients are on a long waiting list for a human therapist. Or, physicians may recommend tools like CBT-based apps to practice skills between sessions, or to provide low-acuity support to patients with mild stress. In every case, the recommendation should come with clear caveats about the tool’s limitations and an emphasis on maintaining a connection with a human healthcare provider.
Shaping the future of AI in mental healthcare
As AI therapy grows in popularity, the medical community is working out how to incorporate these tools ethically and effectively.
For physicians, this means demanding clinical validation, prioritizing patient safety and understanding which tools have evidence behind them — and which they should guide patients away from. The goal is to harness the accessibility and scalability of AI without sacrificing the profound human connection that lies at the heart of healing.
As this technology develops, platforms that foster peer-to-peer discussion are useful. Sermo provides a space for physicians worldwide to share experiences, debate ethical dilemmas and collectively shape the standards for AI in medicine.