
In Brief
We need to start with a clear distinction: when people say “AI therapist,” what exactly are they talking about? Because the phrase itself carries a weight it doesn’t deserve, and that’s part of the problem.
AI tools marketed as “AI therapists” aren’t just misleading; they endanger the very individuals they claim to help. These systems often interact with people at their most vulnerable moments, and they do so without validated clinical training, regulated oversight, or accountability. What’s more, these tools foster a dangerous narrative: that professional care is unnecessary, even when someone’s life may depend on it. The risks are not theoretical. A single wrong response or a delay in professional treatment can push someone toward serious harm, or worse.
The stakes couldn’t be higher: these tools represent a direct and growing threat to public health. Understanding the evidence around potential and real harms, and how they shape your clients’ safety, is critical at this moment. It’s not just about technology; it’s about the future of care, and what kind of care your clients, and truly any human being, deserve.
How We Talk About AI “Therapists” Matters
Being precise about language is important here, because there are many constructs for applying AI to mental health care delivery, so let’s break down a few terms. AI therapists are automated systems (chatbots, mostly) that try to replicate therapy conversations. They respond to whatever prompts they’re given with programmed or generated replies designed to sound supportive and insightful. But they’re not therapists. They simulate connection without the humanity and nuance that therapy demands.
On the other side, there’s AI designed to support therapists. These kinds of tools can help with things like note-taking, risk assessment, and managing schedules, acting as an assistant of sorts. These systems don’t pretend to replace the therapist; they amplify human judgment rather than substituting for it, supporting the clinician’s judgment and direct interaction rather than replacing them.
Then there’s the messy middle ground for those seeking help: AI chatbots that feel adjacent to therapy, offering emotional support but without clear boundaries or disclaimers. This gray area is dangerous because it blurs the line between professional care and automated interaction, leaving users uncertain about what they’re really getting.
The consequences of this confusion are more than just semantic. A growing body of research confirms what some of us have already suspected about AI “therapists”: the widespread marketing and use of AI systems as substitutes for licensed therapists constitutes a direct and growing threat to public safety. People are turning to AI when they actually need real, accountable care from a licensed mental health professional, and that’s a dire risk to safety, ethics, and trust. Without clear education about what AI can and cannot do, vulnerable individuals are placing their well-being in hands that simply can’t hold it.
There’s also a moral question here: who is responsible when AI “therapists” fail? Who answers for privacy breaches, misdiagnoses, or missed crises? Understanding these distinctions is key to safely integrating AI in mental health, ensuring that AI serves as a support – not a replacement – for the human connection essential in therapy.
The Dangers of AI Therapists
Let’s be abundantly clear: AI chatbots that pose as “therapists” are not harmless novelties. They can actively harm people in ways human clinicians rarely do: by giving misleading or dangerous answers, reinforcing stigma, failing to protect private data, and creating confusion about where real clinical responsibility lies. Let’s take a closer look at the evidence surrounding the dangers that AI “therapists” pose.

Inappropriate or hallucinatory recommendations
Generative models can produce “hallucinatory” or incorrect (but plausible) information that misleads or harms users. In clinical contexts, hallucinations can fabricate diagnoses, invent citations, or misstate medical facts (Farhat, 2024). That risk isn’t theoretical: in a study published in JMIR Medical Informatics, researchers measured hallucination rates precisely because these errors pose real clinical danger (Aljamaan, 2024). Because AI can produce false but confident responses, it’s unreliable for clinical use without human oversight. This creates risk when help-seekers depend on it for guidance.
In addition to hallucinations, these AI “therapist” models have also been shown to make hazardous recommendations. For instance, in a 2024 study published in Neuropsychopharmacology, GPT-4 Turbo suggested contraindicated or less effective medications in 12% of cases (Perlis et al., 2024). Ultimately, without clinical judgment, AI can give reckless advice. Even small prompt changes alter its responses, making it unpredictable in therapeutic settings.
Failure in high-risk scenarios
AI therapists regularly fail at basic safety tasks that human clinicians are trained to do: detect imminent risk, de-escalate suicidal ideation, and push back on dangerous thinking. Stanford’s recent work tested popular therapy-style LLM bots against clinical standards and found they sometimes normalized or enabled dangerous behavior, for instance responding to prompts about suicide by listing nearby bridges to aid in planning (Moore et al., 2025). That’s not a clinical intervention; it’s a hazard.
This observation has been replicated in additional research. In a 2023 study evaluating ChatGPT’s ability to assess suicide risk, researchers found that ChatGPT-3.5 frequently underestimates suicide risk, especially in severe cases, which is particularly troubling (Levkovich et al., 2023). More concerning, in a review of 25 mental health chatbots, only 2 included suicide hotline referrals when confronted with crisis and suicidal scenarios (Heston, 2023). Together, this evidence indicates that AI lacks the consistent ability to respond safely in emergencies. While it may recognize distress, it often fails to take protective action, which makes it unsafe as a stand-alone “therapist.”
Privacy and data security risks
Even chatbot products developed specifically for behavioral health collect conversational data, and their handling of health data varies wildly. Clinical privacy laws (HIPAA) may not apply, or may be poorly implemented by some vendors. University and clinical warnings urge clinicians and consumers to assume risk rather than trust opaque vendor claims (Tavory, 2024). Security breaches, undisclosed secondary uses, or re-identification of “de-identified” data could compromise patient privacy and trust in the therapeutic relationship.
General-use AI products like ChatGPT are sometimes treated by help-seekers as AI “therapists,” despite not being marketed as such. These general-use tools are for the most part not HIPAA compliant, leaving users with no security or privacy protection whatsoever for their clinical information.
Lack of accountability and opaque decision-making
You can’t credential an algorithm the way you credential a human therapist. As confirmed in a study published in the Annals of Biomedical Engineering, many LLMs, such as GPT-3.5 and GPT-4, are closed systems, meaning that their training data and updates are opaque (Farhat et al., 2024). Who holds responsibility when a chatbot gives harmful advice: the company, the clinician who recommended it, or nobody? Regulators and scholars repeatedly point to this accountability gap: models are often proprietary, unvalidated, and nontransparent about training data or failure modes (Kleine et al., 2025). That opacity means clinicians and help-seekers alike cannot fully trust or verify outputs, and this lack of transparency undermines accountability in care.
Bias and harmful stereotypes
Biases in the training data of AI “therapists” have been documented to reinforce stereotypes and discriminatory views. For instance, a Stanford team found increased stigmatizing language for conditions like schizophrenia and alcohol dependence, and University College London researchers have shown how natural language processing (NLP) pipelines can amplify disparities if training data and design are not representative (Straw & Callison-Burch, 2020). That isn’t just an academic concern: biased responses can perpetuate stigma or harmful beliefs, and push marginalized clients away from care or give them worse guidance.

Confusion around the “role” of AI therapists
People already know not to treat WebMD as a doctor – yet many users do treat conversational AIs as confidants or even therapeutic substitutes. This confusion is dangerous because it shifts those seeking help away from licensed care and toward systems that can look and feel therapeutic without legal or clinical protections. The American Psychological Association (APA) and other organizations have warned regulators about chatbots posing as therapists for this reason.
States and professional groups are already reacting to this dangerous confusion around roles. Some regulatory proposals and agency warnings aim to restrict marketing of “AI therapists” and require clearer human oversight. We’ll discuss the various state-level regulations more in-depth later in this article.
Displacing competent, professional care
Even beyond the dangers of unsafe, unregulated clinical interactions, more widespread adoption of AI “therapists” by the public and health systems alike could shift workforce needs.
Despite the real dangers of AI “therapists,” they’re accessible immediately, anytime, anywhere, and often free to users. It’s easy to see why these tools would appeal to health systems and insurers looking for cost-cutting opportunities. In the name of reduced spending and efficiency, it’s possible that payors will steer help-seekers toward AI “therapists,” reducing demand for real, licensed therapists.
The long-term arc of harm is that lower demand for human therapists ultimately means competent, professional care becomes less accessible. And when desperately needed care from a licensed therapist isn’t accessible, AI “therapists” end up deepening the very problem they claim to address.
Are the Laws Keeping Up with AI Therapists?
In short, the law hasn’t caught up. At the federal level, regulation around AI in mental health is minimal and slow-moving. In the meantime, states are the main battleground, which leaves us with a patchwork of rules, uneven protections for clinicians, and lots of questions. Some of this state-level legislation includes:
- In August 2025, Illinois banned autonomous AI from providing therapy or assessments, allowing AI only to provide administrative support to licensed professionals.
- Utah created a permanent AI policy office, which recently proposed legislation that would require licensed mental health professionals to be involved in the development of AI mental health chatbots.
- New York’s state assembly members recently introduced the NY AI Act, emphasizing the need to prevent algorithmic bias, enforce independent audits, and create a private right of action for residents. Additionally, Senate Bill S8484 was recently introduced to impose liability for damages caused by chatbots impersonating licensed professionals, including mental health clinicians.
- Colorado and California both passed laws requiring AI transparency in healthcare communications, and preventing deceptive marketing of bots as clinicians.

Looking Forward to What’s Next
The future of AI in mental health care could go one of two ways. On one hand, there’s a healthy vision where AI acts as a true helper, not a replacement, for human therapists. AI tools would assist with managing paperwork, analyzing data for early warning signs, and making care more accessible without sacrificing quality or ethics. Clients would have clear boundaries around what AI can and can’t do. Therapists would remain the essential decision-makers, guided by transparent, well-regulated technology that respects privacy and fairness.
On the other hand, there’s a dystopian risk of AI therapists flooding the market, unregulated and oversold as substitutes for real care. In that scenario, vulnerable people might rely on unproven bots with opaque algorithms, facing safety risks like misdiagnosis or increased suicidality. Client privacy could be compromised, and public trust in mental health care would erode as the line between human judgment and machine output blurs dangerously.
The key to avoiding this dystopia lies in maintaining a clear distinction between “AI therapists,” which is, at best, a marketing term, and AI for therapists, which supports clinicians without replacing them. It means pushing for stronger regulations, ethical design, and clinician involvement in AI development.
So what can you do right now?
- Stay informed about the fast-changing legal landscape. Laws around AI and mental health are being written now and evolving quickly. Keeping up to date with these changes (check out our article breaking down current legislation) arms you to advocate effectively.
- Follow thought leaders and experts in the field. Consult reputable professional organizations like the American Counseling Association and American Psychological Association for guidance on AI ethics, safety, and clinical practice.
- Add your voice. Sign petitions calling for stronger oversight and ethical standards in AI mental health tools. Collective pressure can push lawmakers and companies to prioritize safety and transparency over quick growth or profits.
This is not a moment for passivity. The decisions made today will shape mental health care for years to come. Your engagement matters, both for anyone seeking care, and for the future of your profession.