As licensed mental health professionals — trained at the master’s and doctoral levels — we are calling for urgent legislative action to uphold clinical standards and protect the public.
It must be made illegal to market, sell, or deploy any product labeled or described as, or implied to be, an “AI Therapist.”
Over 40 million adults and 5 million children in the United States interact with licensed clinicians every year. Absent immediate legal and regulatory intervention, millions of clients will be misled into unsafe care, diverted from appropriate treatment, and placed at risk of deepening mental health crises, suicide, or other significant harm.
Mental health treatment is a regulated, evidence-based profession developed over more than a century. Practicing clinically effective therapy requires years of formal education, clinical supervision, licensure, ongoing continuing education, and ethical and legal accountability. No algorithm — regardless of technical sophistication — can fully replicate this foundation, and it certainly cannot substitute for the human relationship at the heart of effective care.
AI tools marketed as “AI Therapists” are not only misleading — they endanger the very individuals they claim to help. These systems often interact with people at moments of profound vulnerability: during episodes of severe depression, heightened anxiety, active substance use, suicidal ideation, or acute crisis. In such moments, clinical missteps are not benign — they can be life-threatening.
Moreover, these tools risk reinforcing the false belief that professional care is unnecessary — even when symptoms are severe or risk of harm to self or others is imminent. The resulting delay in appropriate treatment is not a matter of convenience — it is a matter of clinical safety.
A growing body of research confirms the seriousness of these concerns:
Together, these studies paint a clear and compelling picture:
The widespread marketing and use of AI systems as substitutes for licensed therapists constitutes a direct and growing threat to public health.
Despite these risks, a number of technology companies have already launched or promoted products described — explicitly or implicitly — as AI-based therapy:
We propose a two-pronged approach to protect both the public and the future of ethical innovation in mental health care:
This framework allows for responsible innovation while ensuring that licensed clinicians remain the gatekeepers of mental health care — as they must.
There are more than 750,000 licensed mental health professionals in the United States. We are the clinicians who uphold the standards of care, deliver healing relationships, and maintain accountability in a system built on trust.
In August 2025, Illinois became the first state to pass legislation banning AI from providing psychotherapy services, setting a precedent for how lawmakers can protect client safety and uphold professional standards. Florida and other states are actively considering similar legislation.
These early wins signal what’s possible when we raise our voices as licensed mental health professionals, and what’s at stake if we stay silent. We urge legislators and regulators across the country to act swiftly and decisively.
This is our moment to lead. Let’s draw the line — together.
At Blueprint, we support over 45,000 licensed therapists and use AI ethically to assist real therapists in hundreds of thousands of sessions every month. We build AI tools that amplify — not replace — clinical judgment. Our team includes many clinicians, and we know firsthand the difference real therapy makes. We also know how damaging it would be to pretend an algorithm can do the same.
Substance Abuse and Mental Health Services Administration. (2023). 2022 National Survey on Drug Use and Health (NSDUH) Detailed Tables. U.S. Department of Health and Human Services. https://www.samhsa.gov/data/report/2022-nsduh-detailed-tables
Moore, J., & Haber, N. (2025, June 11). Exploring the dangers of AI in mental health care. Stanford Institute for Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/exploring-dangers-ai-mental-health-care
Guo, Y., Lai, C., Thygesen, H., Farrington, M., Keen, J., & Li, J. (2024). Generative AI and mental health care: A systematic review of large language model applications. JMIR Mental Health, 11, e56248. https://doi.org/10.2196/56248
Zhou, Z., Kang, R., Klein, L., & Riedl, M. (2023, August). Do people trust AI therapists? Evaluating empathy, alliance, and safety in LLM-based therapy responses. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544548.3581502
Aguilar, M. (2025, July 22). Slingshot AI, the a16z‑backed mental health startup, launches a therapy chatbot. STAT News. https://www.statnews.com/2025/07/22/slingshot-ai-a16z-therapy-chatbot-launch/
Menlo Ventures. (2025, June 26). The state of consumer AI: Survey of over 5,000 U.S. adults shows 61% use AI; highlights mental health usage trends. https://www.menlovc.com/blog/state-of-consumer-ai-2025
Sentio Institute. (2025, March 18). Survey: 49% of large language model users with self‑reported mental health conditions use LLMs for emotional support. https://www.sentioinstitute.org/research/llm-mental-health-use-2025