Ban AI “Therapists”: Protect Clients from Serious Harm and Safeguard the Integrity of Mental Health Care

To: U.S. Lawmakers, State Legislators, Insurance Regulators, and Health Policy Leaders
From: Licensed Mental Health Professionals Across the United States

🚫 Prohibit the Marketing and Sale of “AI Therapist” Products

As licensed mental health professionals — trained at the master’s and doctoral levels — we are calling for urgent legislative action to uphold clinical standards and protect the public.

It must be made illegal to market, sell, or deploy any product labeled, described, or implied to be an “AI Therapist.”

In the United States, more than 40 million adults and 5 million children interact with licensed clinicians every year. Absent immediate legal and regulatory intervention, millions of clients will be misled into unsafe care, diverted from appropriate treatment, and placed at risk of deepening mental health crises, suicide, or other significant harm.

Mental health treatment is a regulated, evidence-based profession developed over more than a century. Practicing clinically effective therapy requires years of formal education, clinical supervision, licensure, ongoing continuing education, and ethical and legal accountability. No algorithm — regardless of technical sophistication — can fully replicate this foundation, and it certainly cannot substitute for the necessary human relationship at the heart of effective care.

🧠 AI “Therapists” Are Dangerous: The Evidence Is Clear

AI tools marketed as “AI Therapists” are not only misleading — they endanger the very individuals they claim to help. These systems often interact with people at moments of profound vulnerability: during episodes of severe depression, heightened anxiety, heavy substance use, suicidal ideation, or acute crisis. In such moments, clinical missteps are not benign — they can be life-threatening.

Moreover, these tools risk reinforcing the false belief that professional care is unnecessary — even when symptoms are severe or risk of harm to self or others is imminent. The resulting delay in appropriate treatment is not a matter of convenience — it is a matter of clinical safety.

A growing body of research confirms the seriousness of these concerns:

  • A 2025 Stanford University study, led by Jared Moore and Nick Haber of the Institute for Human-Centered Artificial Intelligence (HAI), evaluated the clinical safety of AI therapy chatbots currently marketed to consumers — including Pi, 7 Cups’ Noni, and Character.ai’s “Therapist” bot. The researchers found these systems frequently produced stigmatizing, emotionally inappropriate, and clinically unsafe responses, particularly in scenarios involving trauma, suicidality, and delusions. https://stanford.io/3UFCeHn
  • A 2024 University College London systematic review, published in JMIR Mental Health by Guo, Lai, Thygesen, Farrington, Keen, and Li, evaluated the current landscape of large language models (LLMs) applied in mental health care. Reviewing dozens of use cases across AI “therapy” tools, the authors concluded that these systems routinely struggle with hallucination, bias, and unreliable interpretation of diagnoses. The review found no evidence of clinical effectiveness and warned that AI therapy applications therefore pose a significant risk to patients and clients when used without licensed clinician oversight. https://jmir.org/2024/mentalhealth/LLM-review
  • A 2023 MIT and Yale University study conducted by Zhou, Kang, Klein, and Riedl explored how users perceived empathy, effectiveness, and trust in interactions with AI-generated therapeutic responses. The study found that participants consistently reported higher levels of emotional safety, perceived empathy, and therapeutic alliance when interacting with responses generated by licensed human therapists. Decades of research validate a strong therapeutic alliance as one of the most important factors in alleviating depression, anxiety, and other acute conditions. https://arxiv.org/abs/2306.01872

Together, these studies paint a clear and compelling picture:

The widespread marketing and use of AI systems as substitutes for licensed therapists constitutes a direct and growing threat to public health.

⚠️ A Rapid Commercial Expansion Without Clinical Oversight

Despite these risks, a number of technology companies have already launched or promoted products described — explicitly or implicitly — as AI-based therapy:

  • Big Tech platforms including Meta, OpenAI (ChatGPT), and xAI (Grok) now feature AI personas labeled or role-playing as “therapist” or “counselor.” These bots are easily accessible to the public, including adolescents and other high-risk populations. Despite being unlicensed and unregulated, they often engage users in moments of deep vulnerability — with no direct professional supervision, formal safeguards, or compliance with clinical standards.
  • Startups such as Slingshot, Abby, Earkick, Noni, Pi, and others have gone further — directly marketing products labeled “AI Therapists” or offering “AI Therapy” to consumers. These tools are not supported by licensing bodies, peer-reviewed efficacy studies, or accountable clinical governance — yet are widely available across app stores, search engines, and social media.
  • Combined, these AI “Therapists” and therapist-labeled personas are projected to reach tens of millions of vulnerable users globally by the end of 2025 — with little to no regulation, no required disclaimers, and no compliance guardrails. Their continued spread represents a direct threat to public safety and to the integrity of mental health care itself.

🛡️ The Policy Response Must Be Decisive and Clear

We propose a two-pronged approach to protect both the public and the future of ethical innovation in mental health care:

  1. Ban the Commercialization of “AI Therapist” Products

    Enact legislation at the federal and state levels to prohibit the sale, marketing, or deployment of any AI system advertised as an autonomous provider of psychotherapy, counseling, or mental health care. Just as practicing medicine or psychology without a license is unlawful, this practice must be explicitly barred.
  2. Regulate All Client-Facing Mental Health AI Under Licensed Oversight

    Any AI product intended for use in clinical mental health contexts must be:

    • Activated, controlled, and supervised by a licensed mental health professional
    • Barred from autonomous interaction with clients outside a clinician-led framework
    • Subject to the same legal, ethical, and safety standards as any other tool used in patient care

This framework allows for responsible innovation while ensuring that licensed clinicians remain the gatekeepers of mental health care — as they must.

💪 We Are Over 750,000 Strong — And Early Wins Are Creating Momentum

There are more than 750,000 licensed mental health professionals in the United States. We are the clinicians who uphold the standards of care, deliver healing relationships, and maintain accountability in a system built on trust.

In August 2025, Illinois became the first state to pass legislation banning AI from providing psychotherapy services, setting a precedent for how lawmakers can protect client safety and uphold professional standards. Florida and other states are actively considering similar legislation.

These early wins signal what’s possible when licensed mental health professionals raise their voices — and what’s at stake if we don’t. We urge legislators and regulators across the country to act swiftly and decisively.

✍️ We Call For:

  1. A federal and state ban on marketing or selling any AI product as a “therapist,” “counselor,” “psychologist,” “clinical social worker,” “psychiatrist,” “psychotherapist,” or any equivalent title, descriptor, or implication suggesting the product can independently provide mental health care.
  2. Mandatory licensed clinician oversight for all client-facing mental health AI — requiring that any AI used in clinical contexts be activated, controlled, and supervised by a licensed mental health professional, with no autonomous therapeutic interaction permitted.

What You Can Do

  1. Sign the Petition: Add your name to the call for a ban on AI “Therapists” and for licensed oversight of client-facing AI tools.
  2. Join the Between Sessions Community: Be part of the growing coalition organizing next steps — including advocacy, education, and regulatory action.
This is our moment to lead. Let’s draw the line — together.

💙 About Blueprint, the Sponsor of this Petition:

At Blueprint, we support over 45,000 licensed therapists and use AI ethically to assist real therapists in hundreds of thousands of sessions every month. We build AI tools that amplify — not replace — clinical judgment. Our team includes many clinicians, and we all know the difference real therapy makes. We know how damaging it would be to pretend an algorithm can do the same.

References

Substance Abuse and Mental Health Services Administration. (2023). 2022 National Survey on Drug Use and Health (NSDUH) Detailed Tables. U.S. Department of Health and Human Services. https://www.samhsa.gov/data/report/2022-nsduh-detailed-tables

Moore, J., & Haber, N. (2025, June 11). Exploring the dangers of AI in mental health care. Stanford Institute for Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/exploring-dangers-ai-mental-health-care

Guo, Y., Lai, C., Thygesen, H., Farrington, M., Keen, J., & Li, J. (2024). Generative AI and mental health care: A systematic review of large language model applications. JMIR Mental Health, 11, e56248. https://doi.org/10.2196/56248

Zhou, Z., Kang, R., Klein, L., & Riedl, M. (2023, August). Do people trust AI therapists? Evaluating empathy, alliance, and safety in LLM-based therapy responses. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544548.3581502

Aguilar, M. (2025, July 22). Slingshot AI, the a16z‑backed mental health startup, launches a therapy chatbot. STAT News. https://www.statnews.com/2025/07/22/slingshot-ai-a16z-therapy-chatbot-launch/

Menlo Ventures. (2025, June 26). The state of consumer AI: Survey of over 5,000 U.S. adults shows 61% use AI; highlights mental health usage trends. https://www.menlovc.com/blog/state-of-consumer-ai-2025

Sentio Institute. (2025, March 18). Survey: 49% of large language model users with self‑reported mental health conditions use LLMs for emotional support. https://www.sentioinstitute.org/research/llm-mental-health-use-2025
