
In Brief
Artificial intelligence (AI) is increasingly becoming part of mental health practice. For therapists, this means new opportunities to improve care and streamline workflows, but also new ethical questions and responsibilities. In response, professional organizations have released guidance to help clinicians navigate AI’s evolving role in therapy. Notably, the American Psychological Association (APA) released its Ethical Guidance for AI in the Professional Practice of Health Service Psychology in June 2025, and the American Counseling Association (ACA) has released its Recommendations for Practicing Counselors and Their Use of AI.
Both sets of guidelines share core values like transparency, client welfare, and professional accountability, but understanding where they align and where they differ can help you use AI tools thoughtfully, balancing innovation with ethics in your practice.
Highlights from the ACA and APA Recommendations
Both the ACA and the APA see AI as a tool that can improve client care, streamline workflows, and expand access when used with strong human oversight. They agree that practitioners remain fully responsible for clinical decisions and must follow ethical codes. Both stress transparency with clients, protection of sensitive data, and active efforts to reduce bias in AI tools. Ethical principles like beneficence (promoting well-being), nonmaleficence (avoiding harm), integrity (honesty and consistency), and respect for rights (protecting autonomy and dignity) guide both sets of recommendations.
Benefits of Therapists Using AI in Clinical Settings
Let’s start with the promise of AI applied to a therapeutic practice. Both organizations’ recommendations recognize that AI can:
- Support diagnostic and treatment decisions through data analysis.
- Improve efficiency in administrative tasks like documentation, scheduling, and billing.
- Highlight client trends or risks that may not be immediately visible.
- Expand access to care in underserved areas via telehealth and automated tools.
- Assist with continuing education by summarizing research and evidence-based practices.
In short, when integrated carefully, AI is a powerful tool that can free you from repetitive tasks, giving you more time for direct clinical work. It can also act as a second set of eyes on treatment progress, flagging subtle shifts in symptoms or engagement.

Ethical Considerations and Potential Risks of Using AI in Your Practice
With any new technology come new safety concerns to examine. Across both the APA and ACA guidance, if you’re thinking about adding AI tools to your practice, there are a few guardrails for how those tools should be applied, including:
- Transparency & informed consent: Telling clients when and how AI is used, in plain, culturally appropriate language. Offer opt-out options and explain alternatives.
- Bias & equity: Checking training datasets and vendor testing processes for representation and fairness. Be aware that some tools can reinforce discrimination. Always review content and output generated from AI for cultural sensitivity. Avoid tools that reinforce discrimination.
- Data privacy & security: Only using HIPAA-compliant systems. Understand data storage, encryption, and vendor policies.
- Accuracy & reliability: Using AI that has been validated with sound evidence. Be aware that some tools may produce unreliable results, such as hallucinations (for example, making up content in a note that didn’t happen in the session). Only use tools that have positive reputations among peers and strong customer service teams that respond to issues quickly
- Human oversight: Ensuring that you, not the AI, make all final clinical decisions.
- Liability: Recognizing that poor oversight or reliance on unverified AI can lead to ethical and legal consequences. Stay updated on professional guidelines and laws.
AI can extend your reach, but it also raises high-stakes risks. The APA and ACA are clear: if you integrate AI, you carry responsibility for its outcomes. A misdiagnosis, a biased recommendation, or a privacy failure doesn’t fall on the software—it falls on you.
The danger is not just clinical missteps, but erosion of trust if clients sense they’re being treated by an algorithm rather than a clinician. Overreliance can dull your judgment; inattention to bias can reinforce the very inequities you may be working to dismantle. In short: AI can be a helpful partner, but it’s never a shield. The clinician’s ethical, legal, and relational accountability remains central, and ignoring that puts both you and your clients at risk.
How to Apply These Recommendations in Your Clinical Workflow
With the APA and ACA’s shared guidelines for safe AI use in mind, let’s look at how those recommendations can be applied in your practice. Consider:
- Adding clear AI use statements to your informed consent process.
- Regularly reviewing your AI tools for bias, accuracy, and compliance.
- Building review checkpoints into your workflow before acting on AI-generated outputs.
- Documenting AI use and decision-making in client records.
- Staying current with AI ethics training and emerging regulations.
- Advocating for better vendor transparency and ethical development standards.
Overall, it’s important to make sure that client consent around AI starts from the beginning: if you use an AI-assisted intake form, introduce any AI tools you use to your new client and review any flagged risks yourself before deciding on next steps. And if you work with an AI tool that provides clinical guidance, be sure to document both the AI’s assessment and your reasoning for agreeing or disagreeing with it.

Where ACA and APA Guidance Differs
While the ACA and APA share a core ethical stance, they differ in emphasis:
- ACA focuses more on practical, client-facing practice, such as clear communication in informed consent, cultural responsiveness in AI use, and vigilance about bias and equity, always underscoring that AI must serve the counseling relationship.
- APA also addresses transparency and privacy while placing greater weight on scientific rigor, calling for validation, accuracy, and methodological reliability before AI tools are adopted into practice.
- The ACA frames AI primarily as a counseling aid with strong emphasis on relationship, cultural care, and trust. The APA frames AI as a clinical-scientific tool that must meet high evidentiary standards before being deployed.
For counselors specifically, these differences can shape how you select tools and document decisions. Following both sets of recommendations provides a balanced approach that protects client trust and upholds scientific integrity.
Key Takeaways
AI in therapy is no longer a novelty. It’s part of the infrastructure of mental health care, quietly shaping the way you write notes, set reminders, flag risks, and organize caseloads. But its benefits aren’t automatic. They depend entirely on how you, as the human in the room, choose to integrate them.
Both the ACA and APA have been clear: AI should support your work, not substitute for your judgment. That’s not a sentimental argument about the necessity of human empathy; it’s a practical one. A client’s trust is built on your accountability, your ethics, and your professional responsibility. No algorithm can carry that weight.
The hazards, however, are real and worth examining. Bias embedded in training data can deepen inequities. If a tool is not HIPAA compliant, gaps in data security can expose client information to breaches. Without transparency, clients may not know where their therapist’s voice ends and a machine begins.
The good news is you always have agency here. You can integrate AI responsibly by:
- Being transparent about when and how it’s used in your work.
- Auditing its output for accuracy and bias.
- Keeping your professional expertise as the final decision-maker.
- Choosing tools from companies that meet high standards for privacy and ethics.
The technology will keep evolving, and so will the professional guidelines. Staying informed isn’t optional; it’s part of maintaining competence in your role. The pace will be uncomfortable. There will be hype cycles and failures. But there will also be genuinely useful advances: tools that make your notes sharper, your risk assessments faster, and your time with clients less encumbered by administrative drag.
If you keep your hand on the wheel, AI can be an assistant to your work, not a threat. And if you stay alert to its risks, you’ll not only protect your clients, you’ll also shape the next phase of therapy into one where the human relationship remains the foundation.
