
In Brief
Artificial intelligence (AI) is quickly becoming part of the mental health care toolkit. From decision-support tools that help therapists interpret data to generative AI that assists with documentation or even interacts with clients, technology is reshaping the way care is delivered.
As AI becomes more common in clinical practice, it also raises new ethical questions, especially around informed consent. Traditional consent forms and conversations may no longer be enough. Clients deserve to know when technology is playing a role in their care and how it might impact their experience in therapy.
In this guide, we’ll break down what using AI really means in a therapeutic context and offer examples of how to talk with clients about it clearly and ethically. Whether you’re just starting to explore AI or are already using it in your practice, these conversation starters and consent examples can help you build transparency and trust.
Defining AI in a Clinical Context
Let’s start with a simple definition: In a clinical setting, AI refers to technologies that can analyze information, make recommendations, or generate content based on patterns in data. In therapy, AI typically shows up in two main ways:
- Clinician-support tools – These are technologies that work behind the scenes. They might help you write notes, analyze client responses, or suggest treatment options based on best practices.
- Client-facing AI – These tools interact more directly with clients. Think chatbots, mental health apps, or automated tools that guide clients through activities or self-reflection outside of sessions.
Consent and disclosure are crucial whenever AI is used in mental health treatment, as each type of AI brings different ethical considerations. Even when an AI tool primarily streamlines administrative tasks (such as drafting progress notes), the therapist is ethically responsible for disclosing its use and obtaining the client's written consent. When AI is involved in clinical decision-making or interacts directly with clients, more extensive informed consent is essential to ensure transparency and uphold ethical standards. Additionally, therapists should not fully delegate decision-making to AI; human judgment must remain central to the therapeutic process.

When and How to Disclose AI Use
The general rule of thumb? If AI is implemented in clinical care in any meaningful way, it’s your responsibility to tell your clients. This might include:
- Letting clients know if you’re using AI tools to support diagnosis or treatment planning.
- Informing clients if you’re implementing AI-powered notetaking tools to generate progress notes.
- Clarifying how client data is used, stored, or analyzed by these tools and whether any third parties are involved.
You don’t need to overwhelm clients with technical jargon. Instead, focus on being clear, honest, and approachable. Share why you’re using the tool, how it supports the work you’re doing, and what the client can expect. If a client is well versed in technical terms and AI policies, choose products from companies with strong customer support so you can get answers to questions that fall outside your own expertise.
Core Elements of Informed Consent with AI in Therapy
With AI becoming more common in therapy, it’s important to revisit the core elements of informed consent to ensure clients fully understand and agree to its use. Here are the key points to address:
- Transparency: Clients need to know when and how AI is used in their care. This includes explaining what types of AI tools are used (e.g., documentation aids, chatbots, risk detection), how they function, and their role in treatment decisions. Be clear about AI's capabilities and limitations, and disclose any potential risks or uncertainties.
- Competence: As a therapist, you must thoroughly understand the AI tools you use. This involves being able to explain their purpose, functionality, and potential impact on treatment in simple terms. If you can’t confidently explain how an AI tool works and why you’re using it, contact the vendor’s customer support team to educate yourself and find answers to the questions your clients are asking.
- Voluntariness: Clients must have the option to decline AI involvement at any time without negative consequences. Make it clear that opting out of AI tools won’t affect the quality of their care or your therapeutic relationship. Provide alternative options for those who prefer a fully human-driven approach.
- Privacy and data considerations: AI tools often involve sharing client data with third-party providers or cloud-based processing. Be open about how client information will be used, stored, and protected. Explain any risks to privacy or confidentiality, and obtain explicit consent for data sharing. Offer clients the opportunity to ask questions and express concerns.
Remember, informed consent is an ongoing process, not a one-time event. As AI capabilities change, so should your consent discussions. Make it a habit to regularly check in with clients about their comfort level with AI and address any new developments or concerns that arise.

Example Consent Language for Various Use Cases
Before incorporating AI tools into your clinical work, it's essential to obtain written informed consent from clients. Verbal explanations help foster transparency and trust, but written consent ensures clients have a clear record of what they’re agreeing to and protects both parties. We recommend using a formal consent form whenever introducing AI-assisted technologies into therapy.
To help you with these conversations, here are some templates and phrasing examples for common AI use cases in therapy:
AI-assisted note-taking or transcription:
- "I use a tool with AI capabilities to help with note-taking during our sessions. This tool listens to our conversation and creates a summary, which I then review and edit for accuracy. The audio is deleted automatically once the note is generated, and the notes are securely kept in your confidential file. You have the right to request that I take notes manually instead. How do you feel about me using this tool in our work together?"
- "To ensure I capture all the important points from our sessions, I've started using a transcription service powered by AI. After each meeting, the AI generates a note, which I review and edit before storing the note in your confidential file. The transcript is deleted after I've completed this process. The company I work with is HIPAA compliant. Let me know if you have any concerns about this or if you'd prefer that I don't use the AI transcription tool."
Generative AI for psychoeducation or homework suggestions:
- "I sometimes use a tool that helps generate ideas for teaching materials or homework assignments that might be helpful for you. For example, if we're working on managing anxiety, I might ask for suggestions on relaxation techniques or thought-challenging worksheets. I always review and approve the content before sharing it with you. How would you feel if I used this resource with you to enhance your treatment outside of session?"
- "There's a tool I find helpful for coming up with creative ideas for experiential exercises or coping strategies. I'd like to use it to generate some personalized suggestions for you to try between sessions. I'll make sure the suggestions align with your goals and preferences, and I'll go over them with you in detail. Let me know if you're open to this or if you have any reservations."
Tiered disclosure examples:
- Basic summary: "In my practice, I use several tools with AI capabilities to assist with documentation, treatment planning, and generating resources. These tools aim to enhance the efficiency and quality of care I provide. Your privacy and data security are my top priorities, and I follow strict protocols to protect your information. You have the right to opt out of AI involvement in your care at any time. Please let me know if you have any questions or concerns about how I use AI in my work."
- Detailed explanation (upon request): "The AI tools I use fall into three main categories: 1) Documentation aids, which help with note-taking and transcription; 2) Clinical decision support tools, which analyze data and provide treatment suggestions; and 3) Resource generators, which help create personalized psychoeducation materials and homework assignments. I carefully review and approve all AI-generated content before using it in your care. Your data is securely stored and never shared with third parties without your explicit consent. I'm committed to using AI responsibly and ethically, in line with the latest guidelines from professional organizations like the APA. If you have any concerns or prefer not to have AI involved in your care, just let me know, and I'll be happy to adjust my approach."
Here’s an example written form to help you start thinking about how you’d like to phrase and frame your own informed consent discussion.

Conversation Guide: Discussing AI Use with Clients
Starting a conversation about AI in therapy can feel challenging, but it's important for building trust and ensuring informed consent. Here's a suggested script to get started:
"I wanted to talk with you about how I use artificial intelligence, or AI, in my practice. AI tools help me with tasks like taking notes, generating resources, and analyzing data to inform treatment decisions. I always carefully review and approve any AI-generated content before using it in your care. How familiar are you with AI in therapy? Do you have any thoughts or concerns about it?"
This invites clients to share their perceptions and ask questions. Some common concerns you might hear include:
- "Is the AI listening to our sessions?": Clarify exactly how and when AI is used, emphasizing that it does not replace human interaction or decision-making. Explain any recording or transcription processes and how that data is handled.
- "How do I know my data is safe?": Discuss the specific privacy measures and security protocols in place to protect client information. Reassure clients that their data is not shared with third parties without explicit consent.
- "Can I opt out of AI use?": Absolutely. Make it clear that clients have the right to decline AI involvement at any point without affecting the quality of their care. Offer alternative options, such as manual note-taking or traditional resource-finding methods.
Involving clients in decisions about AI tool use is important for their empowerment and autonomy. You might say:
"I'd like us to decide together how we use AI in your care. I believe [tool] could be helpful for [purpose], but I want to make sure you're comfortable with it. What are your thoughts on trying this? We can always adjust as needed."
Document these conversations in your clinical notes for transparency and accountability. Remember, AI should enhance the therapeutic relationship, not replace it. Balancing AI use with human empathy and judgment is key to ethical, effective integration.
Ethical and Legal Considerations
As AI becomes more common in therapy, it's important to stay updated on the latest ethical guidelines and legal regulations. Professional organizations like the American Psychological Association (APA), American Counseling Association (ACA), and National Association of Social Workers (NASW) are actively creating frameworks for responsible AI use.
Key ethical principles to maintain when using AI in therapy include:
- Competence: Therapists must receive proper training to understand AI capabilities, limitations, and appropriate use in clinical settings. They should critically assess AI-generated suggestions and apply clinical judgment.
- Transparency: Clients need to be fully informed about AI involvement in their care, including the specific tools used and their role in treatment decisions. Therapists should obtain explicit, written consent for AI use and offer non-AI alternatives.
- Client welfare: AI should support, not replace, the therapeutic relationship. Therapists must prioritize client well-being over efficiency or financial incentives and intervene if AI recommendations may cause harm.
Legal and regulatory landscapes for AI in therapy differ by state, and this area of legislation is evolving quickly. Stay abreast of new developments, ethical recommendations, and legal requirements, and adjust your practices accordingly to remain compliant with local laws and ethical standards.
To ensure compliance and accountability, therapists should:
- Document AI use: Maintain detailed records of AI tools used, their purpose, and a consent form signed by the client. Evaluate client outcomes to assess AI effectiveness and safety.
- Report issues: Quickly inform clients, developers, and relevant authorities of any problems or adverse events related to AI use in therapy.
- Stay updated: Regularly review the latest ethical guidelines, legal requirements, and research on AI in mental health. Engage in ongoing training and skill development.

Cultural Sensitivity, Accessibility, and Digital Equity
When integrating AI into your practice, it's important to consider the diverse needs and perspectives of your clients. Some clients may have concerns about technology and surveillance, particularly those from marginalized communities who have experienced discrimination or privacy violations.
- Understanding client perspectives: Take time to explore each client's comfort level with technology and AI. Some may welcome digital tools, while others may have reservations based on cultural or past experiences or media portrayals of AI. Create a safe space for clients to voice their opinions and ask questions.
- Ensuring equitable access: AI tools should improve accessibility, not create barriers. Consider the digital divide and how factors like socioeconomic status, language, or disability might impact a client's ability to use or benefit from AI-based interventions. Offer alternative options and accommodations to ensure everyone can access the care they need.
- Culturally responsive adaptations: Work with clients to adapt AI tools to their specific cultural contexts. This might involve using inclusive language, incorporating culturally relevant examples, or customizing content to address unique stressors or strengths. Collaborate with cultural experts and community members to make sure AI interventions are culturally safe and responsive.
- Addressing cognitive and language barriers: For clients with cognitive impairments or limited language proficiency, AI tools may present unique challenges. Use simple, clear language to explain AI interventions and obtain informed consent. Partner with interpreters, cultural brokers, or the client’s caregivers, if applicable, to ensure clients fully understand and can meaningfully engage with AI-based care.
Focusing on digital equity requires ongoing effort and involves openness, curiosity, and a commitment to client-centered care. Regularly assess the accessibility and cultural responsiveness of your AI tools, and make adjustments as needed to ensure all clients can benefit equally.
Key Takeaways
As AI reshapes mental health care, therapists need to find the right balance between adopting new technology and maintaining ethical standards. The path to success involves thoughtful, client-focused decisions about integrating AI. Here are the main points for responsibly adding AI to your practice:
- Prioritize written informed consent: Create clear, detailed consent processes that inform clients about AI use, help them make knowledgeable decisions, and obtain express written agreement from the client.
- Maintain human connection: Use AI to support, not replace, the therapeutic relationship. Regularly discuss with clients their experiences and make adjustments as needed.
- Ensure competence: Seek training and education to thoroughly understand the AI tools you use. Critically assess AI-generated suggestions and apply clinical judgment.
- Protect client privacy: Implement strong data security measures and be transparent about how client information is collected, used, and shared.
- Promote equitable access: Consider the varied needs and perspectives of your clients. Offer alternative options and accommodations to ensure everyone can benefit from AI-based care.
To stay informed about ethical AI use, regularly consult resources such as:
- Guidelines from professional organizations (e.g., APA, ACA, NASW)
- AI ethics boards and advisory groups
- Continuing education courses on AI in therapy
- Peer consultation and supervision groups focused on AI integration
By actively engaging with these resources and committing to ongoing learning, you can effectively use AI to improve client outcomes while upholding high ethical standards.
