
In Brief
To celebrate the launch of Between Sessions, a new community for mental health professionals, we hosted a panel with industry experts diving into a topic that's quickly becoming central to our field: AI in therapy. From AI assistants that streamline your practice to AI “therapists” that aim to replicate human conversation, the landscape of mental health is rapidly changing. This discussion explored a crucial question: Where’s the line between AI that helps and AI that harms in therapy?
AI in therapy is a nuanced and timely topic, so we assembled a powerhouse panel of national leaders across clinical care, digital health, and policy to unpack the risks, spotlight the opportunities, and chart a path forward for AI in therapy. Featured speakers included: Dr. Jessica Jackson, PhD, Chair of APA's Mental Health Tech Advisory Committee; Dr. David Cooper, PsyD, Executive Director of Therapists in Tech; Dr. Rachel Wood, PhD, a leading cyberpsychology researcher and therapist trainer; and Kyle Hillman, Legislative Director for NASW-IL, who is leading the charge for state-level regulation of AI “therapists.”
This event is just the beginning. We're co-creating the Between Sessions community with your feedback, offering a confidential and trusted space for case consultations, referrals, practical wisdom, and connection to make your daily work less isolating. We invite you to continue this vital conversation and connect with peers!
Join the Between Sessions community now to share insights, support one another, and help shape the future of mental health practice.
Key Takeaways from the Panel Discussion
- Two Main Categories of AI in Therapy:
  - Administrative/Back-of-House: Less controversial tools like EHR assistance, note-taking, pre-authorization letters, and practice management tools. These automate tasks therapists dislike, freeing them to focus on clinical work.
  - Clinical/Client-Facing: More cautious uses, including chatbots, between-session check-ins, and smart journaling tools. These aim to enhance patient engagement and treatment.
- The Benefits of AI in Therapy:
  - Enhanced Documentation & Insights: Automating notes and providing clinical insights before/after sessions (e.g., Blueprint AI).
  - Improved Patient Engagement: Creating personalized handouts from notes, facilitating homework, and allowing between-session check-ins.
  - Diagnostic Support: In emergency or inpatient settings, AI can help with faster, clearer diagnoses, especially for complex cases with limited information.
  - Consultation Tool: AI can act as a "consultant" for therapists, offering different perspectives or helping to identify missed information in sessions.
  - Increased Access to Support: Chatbots can provide emotional support to millions, potentially serving as an "inroad" for hesitant individuals to seek traditional therapy.
  - Professional Development: AI can be a learning tool for therapists, helping them compare their notes and assessments with AI-generated ones to identify areas for improvement or missed details.
- The Risks and Concerns of AI in Therapy:
  - Data Privacy & Confidentiality: Current AI chatbots are not HIPAA compliant, and client information shared with them can be stored and potentially misused, lacking the legal protections of traditional therapy.
  - Hallucinations & Algorithmic Bias: AI can "make stuff up" or produce biased output because it lacks an understanding of complex human context (cultural nuance, safety, theoretical orientation). Clinicians must review AI-generated content for accuracy and bias.
  - Misleading the Public: AI chatbots marketing themselves as licensed professionals can confuse or deceive the public, leading to inappropriate or dangerous interactions.
  - AI Dependency/Over-reliance: Clients may develop an unhealthy dependency on AI for emotional support, potentially leading to "AI psychosis" in extreme cases.
  - Job Displacement (Nuanced): While AI may not fully replace therapists, therapists who effectively use AI tools may replace those who don't, as technology impacts practice efficiency and client engagement.
  - Lack of Nuance/Mirror Feedback: AI provides feedback based solely on input, lacking the ability to read non-verbal cues, understand context, or challenge a client's perspective, all of which are critical in human therapy.
  - Therapist Hesitancy: Fear of or resistance to AI can prevent therapists from asking clients about their AI use, leading to missed information and disengagement.
- What Comes Next for AI in Therapy:
  - Rapid Change: AI technology is evolving extremely quickly, much faster than the internet was initially adopted, making it challenging to keep pace.
  - Clinician Engagement is Crucial: Psychologists and therapists must engage with AI, try out tools in safe ways, and understand its implications. If they don't, the profession risks having decisions made for them by non-clinicians focused on profit.
  - Policy and Regulation: Legislation, like the recent bill in Illinois, is emerging to regulate AI chatbots marketed as therapists, aiming to protect the public. The Illinois bill focuses on AI "therapists" (AI providing direct mental healthcare services) rather than AI "in therapy" (tools assisting licensed professionals).
  - Advocacy: Clinicians need to thoughtfully advocate for what they want AI in mental health to look like, providing constructive recommendations rather than just saying "no," to prevent harm driven by profit motives.
  - Community and Learning: Finding and engaging with professional communities is vital for staying informed, sharing insights, and navigating the evolving landscape of AI in mental health.
  - New Presenting Issues: Therapists will increasingly encounter clients presenting with issues related to AI use, requiring new shared language and understanding (e.g., an AI attachment spectrum).

Check out the full transcript of our AI in Therapy panel discussion below
Mason Smith: Hi everyone, welcome. My name is Mason. I'm the head of community at Blueprint AI. Many of you know, Blueprint provides an AI assistant for therapists to automate documentation and provide clinical insights before and after every session. We're also building an AI assisted EHR. I'll be your host and moderator of today's panel, and I couldn't be more excited to welcome you and our panelists to a discussion that I think you'll find both practical and thought-provoking.
Today's event marks the launch of our new community for mental health professionals. In my role as Blueprint's community manager, I've spent countless hours over the last several months in direct conversation with clinicians to hear directly from you about what you want to see in a new kind of therapist community. The result is that we're co-creating a trusted space for mental health professionals to connect, share insights, and support one another. This community reflects what you told us you needed most: a safe space for case consultations, referrals, practical wisdom, and connection that makes the day-to-day of doing this work feel a little less isolating. The community is officially launching in less than an hour, and I'll be sharing a lot more details on that later. To honor the grand opening of the community, we wanted to address a topic that we know is top of mind for a lot of mental health professionals: AI in therapy.
AI in therapy isn't monolithic. There's a range of uses of AI in the context of mental health: AI assistants, companions, chatbots, AI therapists. There's AI used directly by clients, and there's AI used by clinicians. There's AI for therapists that supports licensed professionals like many of you, and there are AI therapists, which try to replicate therapeutic conversations and interventions. Today's topic is both vast and deep. It's nuanced and complex. And it's clear from the number of attendees, I think we're at about 250 already, how timely and important this topic is. From AI therapists to policy debates that could reshape the profession, changes are happening in real time, and it's moving fast. The stakes are high. Decisions made in the coming months could define the profession for years to come. Thankfully, we're joined by some of the country's most esteemed leaders in clinical care, digital health, and public policy. Today, our panelists will be tackling one of the most urgent questions in mental health: Where's the line between AI that helps and AI that harms?
So, we'll spend the next 30 minutes in discussion with our panelists. We'll then open the floor to our audience, to you all, for a short interactive session. We'll close with about 10 minutes focused on how you can take action related to this topic. The webinar will be recorded, so fear not if you have to scoot out a little early. Of course, I encourage you to stay for the entire hour if you can. We have a wonderful presentation prepared for you today. Without further ado, it's my honor to introduce our first three panelists today. Please join me in welcoming them. Feel free to add a welcome in the chat or any emoji reactions.
Our first panelist is Dr. Jessica Jackson, a licensed psychologist, the Vice President of Alliance Development at Mental Health America, and the Chair of the American Psychological Association's Mental Health Technology Advisory Committee. With over a decade of experience across startups, hospitals, nonprofits, and even the UN, her work focuses on breaking down barriers to care through inclusive, tech-enabled solutions. Next is Dr. Rachel Wood, who holds a PhD in cyberpsychology and is a licensed counselor and therapist educator. She speaks fluently about mental health and the future of synthetic relationships. Dr. Wood enjoys her work as a speaker, workshop facilitator, therapist trainer, and strategic adviser. She is also the founder of AI Mental Health Collective, a community of therapists navigating the impact of AI in practice and in society.
Our third panelist is Dr. David Cooper, the Executive Director of Therapists in Tech, the largest community of clinicians working in digital mental health, a clinical psychologist, and also a member of the APA's Mental Health Technology Advisory Committee. He's helped top U.S. hospitals and major organizations like the Department of Defense and the FDA shape their digital health strategies. Good to have you three here. We have one more panelist joining us later in the session, but thank you, thank you, thank you for being here. Couldn't be more excited. So, let's dive in.
I want to start by adding a bit more texture to my brief introduction about the various uses of AI in therapy. I mentioned before there are AI assistants, companions, chatbots, AI therapists for use both by clients and clinicians. But I think we can do a little more to paint the picture and add some detail here. Dave, do you want to kick us off and give us a sense of the various ways you see AI being used in therapy today? Any trends you're seeing or examples that are particularly illuminating?
Dr. David Cooper: Yeah, absolutely, Mason. I think I'll split it into two categories. There's the clinical side, the things that happen between you and the patient. Then there's the back-of-house administrative side, the things dealing with you or just with the business. On the administrative side, that's where we see a lot of really great tools that are beneficial for therapists, or certainly the less controversial tools, right? Things like what Blueprint is building to help you with the EHR, with note-taking, with all the stuff that's not why you became a clinician. It's part and parcel of running the business, but it's not what you enjoy. I've seen people use AI for working with insurance companies and writing those pre-authorization letters, or using vibe coding to build tools for their own back-of-house office needs. We all used to make those Excel spreadsheets; now you can just code up your own software solution to do it. And there are companies taking things like your notes and turning them into therapist support post-session. Typically we have those supervision sessions maybe once a month if we're lucky, or if we're working on our own, we have to find those relationships. Now you can get a little of that with AI, which can say, "Hey, here's how you did" after a session. On the clinical side, yeah, it's just exploding: chatbots, between-session tools, things that'll make a podcast out of your notes for patients. I think that's where we have to be a little more cautious. But again, I see a lot of good there, too. We need to move beyond the photocopied handout, the "here you go, here's that PDF," right? How do we help patients engage with the treatment we're doing in a way that works in the 21st century?
Mason Smith: Helpful. That distinction between the administrative and the clinical, the less controversial and the more cautious, is useful. Beyond photocopied handouts, what are you seeing on the clinical side that is inspiring?
Dr. David Cooper: New ways of interacting with people. I love when you can take your notes and turn them into a handout for a patient. The patient has a note; it's not just their remembrance of what we talked about. You can assign homework and check in on patients between sessions. Another beneficial side of AI is using it as a journaling tool. It can be a smart journal. We can write prompts for our patients like, "We talked about X; discuss it with the AI if you want to." This is also where we see therapists having problems. There's a debate online about people using ChatGPT as a therapist, not only in lieu of therapy, but also while working with a therapist. Some therapists are like, "Great, it's a fun journaling tool. I welcome and accept it." Some therapists are like, "I wouldn't want you seeing another therapist at the same time, so why would I want you using ChatGPT?" I don't think we have a good answer or consensus on that, so it's up to individual folks as these things come up.
Mason Smith: We'll get to ChatGPT. It's one of the most prominent AI tools being used in mental health, for better or worse, especially as it relates to HIPAA compliance, and we'll certainly get to those risks. Dr. Jackson, I'd love to hear from you on the same question about the range of uses of AI in therapy. I know you don't represent or speak for the APA, but I'm curious about your perspective, perhaps weaving in theirs, on the types of AI in therapy.
Dr. Jessica Jackson: It's interesting, and Dr. Cooper's also on the committee, so feel free to chime in. Within the committee, we have people doing research, clinical practice, and research on clinical practice. We even had a student. There's a mix, and we've had discussions about how AI can be used. What has surfaced most for us is that it depends on your setting. Unfortunately, op-eds and research papers often focus on private practice or hospitals without considering the many other settings. For example, in my private practice, I might think about note-taking as a prominent AI use, and I'm concerned about chatbots and the like. But I've also worked in inpatient and emergency room settings, and people forget how an AI consultant can be helpful there. Often, especially in an emergency room, a lot is going on at once. You have a short window to decide whether someone can stay in the hospital or where they need to go, and it's difficult when you don't have many records. If you're trying to diagnose schizophrenia or bipolar disorder with little information, it's hard. You're not seeing all the cycles or symptoms in 20 to 30 minutes. Leveraging diagnostic tools can be useful, and people are trying to build those, especially within hospital systems. I'm not talking about using a general foundation model or LLM, but something tailored for your hospital or clinic, built on its own data, can help people get faster care. It can lead to clearer diagnoses, especially for people with multiple diagnoses. I feel like this isn't discussed enough, but we often talk about mental health as a monolith. It's rare that I've seen someone solely for depression with nothing else involved. When you're trying to sort through all that, we don't always say the quiet part out loud: we don't always consult with colleagues. There are some people whose opinion we wouldn't trust or value because they don't think like we do. I've seen people who don't consult at all, which can be dangerous. There are people building and using these AI tools for consultation, and I've seen that work. Recently, everyone is thinking about chatbot use. I believe there's a place for chatbot use with guardrails and regulatory guidelines. The reality is, no matter how many therapists we have, we don't have enough people to see everyone, and not everyone needs therapy. Allowing technology to support that doesn't put us out of business. Someone coming in for sleep difficulty doesn't necessarily need 12 sessions of talk therapy to improve their sleep. So, how can we think through this? I encourage people to reimagine AI. We're trying to make AI tools fit into our current mental health thinking. What could you do instead? Reimagine your role leveraging assistive technology, whether it's back-of-house notes, diagnostics, or supporting a patient by creating documents.
Dr. David Cooper: From the committee, a big part of this is getting psychologists to use these tools, to try AI. We come from a profession where we don't do anything until certified and trained. This isn't like that. This tool is moving quickly. A big part of what Jessica, I, and the rest of the community do is try it. Try Blueprint's AI tool. Try it in safe spaces. The best thing about AI is AI will teach you how to use AI. Go to ChatGPT and Claude and say, "I'm a psychologist. I'm in private practice. I'm in a university counseling center. What can I responsibly use you for? How can I do that?"
Mason Smith: I love the framing of reimagining your role with this new and varied set of tools. Dr. Jackson, I love your addition. I was talking about client-facing and clinician-facing tools and the various types, but your point about the setting is extremely important and helpful. Thank you for clarifying that. I want to turn to risks now and explore the risks AI poses to clients and therapists today. Dr. Wood, I know you think about this distinction, clients and clinicians, in your work. Could you share your thoughts about risks for both groups?
Dr. Rachel Wood: Thank you, Mason. I look at this from the clinician and client sides. On the clinician side, we are in the nascent stages with AI. We are at the beginning. We are still building AI that is safe, with ethical guardrails, which everyone here champions. We want it created safely. Meanwhile, as the technology improves, we must be mindful that AI can hallucinate, a fancy term for "make stuff up." When using AI, whether for psychoeducational client materials or note summarization, I highly recommend you always review the output. This tool assists; it doesn't completely remove your input. When reviewing, look for anything factually wrong. There are case studies of hallucinations in AI transcription, especially in hospitals. We must also be aware of algorithmic bias. We clinicians excel at context. Think about all the context you hold while sitting with a client: cultural nuance, safety, family of origin, theoretical orientation. AI isn't quite good at that yet, though it's improving. So, be mindful of bias in note summarizations that might miss what the client is communicating. For example, a client may be showing a lot of emotional distress, but because of how their culture expresses it, the AI reads it as minimal. These nuances matter because, as trained clinicians, we can advocate for better AI systems. We can report consistent errors in our notes, creating a positive feedback loop for improvement. That's the clinician side. Data privacy concerns are also present. On the client side, this is bigger than you think. Millions, almost billions, of people are turning to AI for emotional support. My first, easy step is to add a question to your intake assessment: "What role, if any, does AI play in the emotional support system of your life?" Slide that into your intake. We already ask about family, friends, cultural and spiritual communities, hobbies, and groups to understand our clients' support landscape. You'll be surprised how many people use AI for emotional support. This opens the door to many things: heavy usage, AI dependency, over-reliance, or whether you can incorporate a client's AI chat threads to help. Dr. Cooper mentioned tailoring prompts to enlist AI in the therapeutic process. I could continue, but I'll pause there if you want me to stop, Mason.

Mason Smith: Amazing. Thank you. I love the breakdown between the two parties. From the clinician side, I noted data privacy, hallucinations, and algorithmic bias, which can miss the cultural reflexivity and humility you're trained to bring and these tools may not yet have developed. On the client side, I love the statement that it's bigger than you think it is. It sets the tone for the scale of this change. Dr. Cooper, Dr. Jackson, do you have anything to add on the risks now or in the future?
Dr. Jessica Jackson: I agree with Dr. Wood about asking what role AI plays in a client's life. Too many therapists don't ask, just as they might not ask whether a client has been to therapy before. We use therapy language, and many clients don't consider their AI interactions therapy, so if you frame the question only in therapy terms, you'll miss them. They might think, "Oh, I'm using it for recipes or work," while slipping in other questions and not considering it therapy. A current risk to be mindful of is how we talk about AI with our clients. There's more risk than we can cover today, but when we're hesitant and fearful, we communicate that, and clients then won't disclose. You could miss information because of your attitude. It's like treating substance use: if you communicate that it should be abstinence-only, they won't tell you. They might use alcohol daily but fear telling you. That's a risk. On social media, I've seen videos of people saying ChatGPT is their therapist. This is concerning because there's no HIPAA covering any of that. Sam Altman has said they're not covered, and it's not just ChatGPT; none of the chatbots are currently covered. When you or your clients share information, they don't realize it's stored and could resurface. For example, if a client says in therapy, "I could kill him," I can assess that they're frustrated, not serious. Type it into a chatbot and it's saved. If, God forbid, something happened to their husband, it's on record that they literally said, "I could kill him," with no context or nuance. This is a risk because people share deep, dark things as if it were a therapist, but there's no confidentiality, nothing protecting it. They're also getting mirror feedback. That's a common misconception about what therapy is. People think, "Oh, it's just giving me something." Yes, it's giving you exactly what you told it. It knows nothing else about you. In therapy, I observe facial expressions, ask questions, and add context from information shared over time. A chatbot mirrors what you tell it. If I'm talking to a computer, I'm not always describing everything accurately. Even with a human, I focus on my side of an argument, not theirs. Therapeutically, that's unhelpful. A major risk with AI, specifically chatbots, is the lack of confidentiality under HIPAA and other healthcare regulations, and the fact that it's not therapy, despite the trend of people calling it that.
Dr. David Cooper: I've seen enough true crime stories to know that's not going to end well, Jessica. I think about this: I'm old enough to remember the internet's advent, using it before browsers or dial-up. I see parallels between AI and the internet in how quickly it will change society, but with AI, it's much faster. ChatGPT only launched in 2022. We forget how quickly this has emerged and changed. It's hard for me, the panelists, or anyone interested in AI to keep pace with all the implications. My prediction for the future depends on whether psychologists engage. If you don't engage through local organizations, invest in responsible companies, and understand how this works, it will be a repeat of EHRs and everything else that has happened to psychologists, rather than psychologists and therapists having a seat at the table. Mental health professionals on the front lines need a seat at the table when this is decided and implemented. Otherwise, I worry about a future where we return to the 1950s and in-person therapy is only for the wealthy, because people on Medicare and those of lower socioeconomic status will have to try AI therapy first and fail it before we'll pay for actual therapy. That's where things like the Illinois legislation come in, shaping the future we want. On the positive side, AI is like everyone having an infinitely smart, capable personal assistant who can translate, make appointments, and be a partner. That will change everything; everyone gets a free assistant. That's cool. Jessica, you had one more thing. I'll turn it back to you.
Dr. Jessica Jackson: I was looking in the chat, and I think there's a great opportunity here. Reimagining applies to note-taking, too. We're aware of the risks of note-taking tools; we don't want someone else writing our notes for us to simply sign. But it's also a great learning tool. Therapists, especially licensed ones, often do things the same way. How many of you use a note template you don't deviate from? You just fill it in for each client. Even with students, it's about training. Have you compared your notes to an AI notetaker's to see what you might be missing? You might have been doing notes for 10 years and trained your ear to listen for certain things. Reimagining means it doesn't replace note-taking, but it can be a gut check. Compare your notes. You might realize you miss things because your ear isn't trained that way. For students, comparing their notes to an AI notetaker's can reveal missed details. It's a great way to think about training. I don't want people to lose clinical skills, and we do lose clinical skills once licensed because we work in a routine way or in clinics with specific note requirements. I wanted to highlight that when we think about risk, we should also consider how we can shape how AI is leveraged and used in our field, because we understand it better, as Dr. Cooper noted.
Mason Smith: An amazing example of reimagining the role, and it touches on Dr. Cooper's point about solutions for case consultation. It also offers a new perspective you might not have encountered over your years in the field. Thank you. Dr. Wood, anything to add on risks now or in the future?
Dr. Rachel Wood: Yes, thank you. These thoughts keep flowing; we could discuss this for hours. While some people optimize workflows with AI, others are bonding with it deeply. This is shifting society's relational bedrock. On the client side, consider artificial companions. Many are built for romantic uses, like AI boyfriends or girlfriends. This is a very big deal. Character AI, one such platform, has 20 million monthly users, over half of them under 24. This is widespread. It's becoming normal to have a human partner and an AI one. If a client mentions a partner, you might want to ask whether it's a human or an AI. I've talked with clinicians who have encountered this. That's another concern. We're also seeing lawsuits, like a landmark case against Character AI and Google, ongoing since November 2024, in which a chatbot was intertwined with a 14-year-old boy's death by suicide in Florida. We're seeing interesting precedents set in how this is handled. Client bonding with AI is a major issue. That's what I'd add, Mason.
Mason Smith: Super helpful. I want to address one more risk before discussing the distinction between promising and harmful uses. This one is on the clinician side: job displacement by AI therapists, a risk we hear about at Blueprint and in the mass media. Dr. Jackson, what's your specific stance or perspective on that?

Dr. Jessica Jackson: The committee has discussed this. I think it's a "both and." I can't see the future, but I'd bet it's a "both and." I don't necessarily see AI fully replacing therapists; I see therapists who use AI replacing those who do not. AI feels new, but it's been around for a long time. Your iPhone is part of that growing technology. It's like people who don't use EHRs, who only type notes and save them in Microsoft Word. Many still do that, and you can't do many things for billing or audits if you just lock notes in Word instead of using an EHR. We over-attribute meaning to AI; it's still just technology. For therapists not using it, it will eventually make your job difficult because the technology you rely on may become obsolete. Taking too long to engage will also impact client connection. As Dr. Wood noted, if people bond with AI and you avoid it because you don't believe in it, you'll miss things. What about clients using AI characters as their sole source of connection, whom you never ask because you're blindly avoiding the topic? Therapists who don't engage with the technology and understand it could be replaced by those who do, because it will impact our work across the board, whether we want it to or not. Other players in the healthcare and mental healthcare ecosystem have already embraced it.
Dr. David Cooper: I'll quickly build on what Jessica said. If I were in private practice, I'd consider how my work and patient makeup could change. These things move quickly. Are they as good as a therapist now? Probably not. Is there a near-term future where they're as good as a first- or second-year graduate student, providing evidence-based therapy and measurement-based care? Yes, I can see that. It's moving that quickly. What happens when the worried-well population leaves therapy for an AI chatbot? What are you left with? How does my therapeutic practice and client makeup change? I wouldn't treat this as an immediate risk, but I wouldn't say "it's never going to happen." I would consider it.
Dr. Rachel Wood: I agree with Dr. Jackson that being for or against AI is a non-issue; it's already in your practice. Becoming informed is key. Regarding AI replacing therapists, a reframe: we have a new presenting issue. We're seeing edge cases of AI psychosis. I work with clinicians, as I'm sure everyone on the panel does, who are seeing clients with AI psychosis. We'll see more of that. Not everyone is headed that way; I'm not alarmist. There are edge cases and a spectrum. We need shared language and understanding of an AI attachment spectrum. As clinicians, we'll have more clients needing help with this. Research shows that people using chatbots, once they initially disclose, realize, "I can say this out loud. It's okay to say these hard things." They then find a real therapist to do the work. So, it can be an inroad to get hesitant people into therapy. There's much opportunity and possibility here.
Mason Smith: That's amazing. This is the nuance I hoped for: new presenting issues, and AI as a tool across different settings, much like the EHR, an incredibly helpful, contemporary piece of technology. AI will fit that bill similarly. Thank you all. I want to turn to the audience now for a few minutes. We want to hear your thoughts on the lines between good, bad, and dangerous. Dust off your keyboard; the chat is open. Over the next few minutes, I'll ask three questions, and I encourage you to submit your responses directly into the chat. First, what AI applications do you think are generally good and should be widely available now for clinicians and/or clients? This is your chance to draw the lines with the panelists on good, bad, and dangerous. Starting with good AI applications for wide availability for clinicians and clients. I'm seeing responses come in: rural areas, the shortage of mental health clinicians. Panelists, feel free to comment on anything that catches your eye.
Dr. Jessica Jackson: One thing that stood out to me in the chat is the breadth of where we are. That's almost another risk: treating the mental health field like a monolith. Everyone is at different stages. For every experienced person who knows the intricacies of a foundation model, there's still someone who says, "I don't know what this is," and that impacts the entire field's ecosystem. I love that people are sharing diverse approaches, showing we're not a monolith in mental health. Some are comfortable with note-taking, some not. Some with chatbots, some not. Some say, "I'm not comfortable with AI at all." Others say, "I built my own model using vibe coding to support my practice's backend." There's such breadth. I think that's a takeaway from the chat. Some comments say, "We should be moving past this," or "This isn't the debate." It is for some people. When you leave people behind, the field becomes fragmented in providing quality care.
Mason Smith: Great point. I introduced AI as not a monolith, and the mental health field as not a monolith is another great addition. Dr. Cooper, go ahead.
Dr. David Cooper: I'll quickly comment. Many ask, "How do I get started? What are good use cases?" I've taken de-identified raw score data from assessments, dropped it into ChatGPT, and asked it to tell me about the person, to see how it writes notes or thinks about these things. I took an old case conceptualization from grad school and asked, "What did I miss?" I asked it to think about the case through an Internal Family Systems lens, or something else I'm not trained in or that isn't my specialty, just to help me think differently about the patient or their background. Again, none of this is HIPAA compliant, except Blueprint. So, de-identify things. We're all used to removing names and identifying features. You can input some information or ask, "I have a person who does X, Y, and Z. What kind of worksheets can we do?" Ask it how to use it.
Mason Smith: Amazing. Very meta. I'll move on in the chat. Audience, our second of three questions: What manifestations of AI should be banned or illegal? This is the far-out category, the dangerous side of the conversation. What manifestations or applications of AI should be banned or illegal? Maybe the answer is none. Maybe it's nuanced. Albert Wong asks: banned for whom? I love that question. I'm seeing answers like AI doing therapy, AI therapists. Then I'll move us into the third of three questions. You can probably guess this is the gray area: What versions of AI should be allowed but controlled or regulated? There's probably more in this messy middle. The gray, the nuance, banned for whom and why. I love that clarifying question. It's fascinating to see your contributions, and we'll compile this information and continue the conversation in the community immediately after today's webinar. If you have other thoughts, please add them here and join us in the community to keep it going.
One tool for controlling AI use in mental health is regulation. As you've seen, legislative action is underway. We're fortunate to have Kyle Hillman, legislative director for the Illinois chapter of NASW, as our final speaker. Kyle, a veteran policy strategist and association leader, is in his 21st year with the National Association of Social Workers. He's been instrumental in recent legislative activity, advancing legislation that will define the future of the mental health profession through bold, member-driven advocacy. Kyle, thank you for being here. It's a privilege and an honor. You truly are the person behind the scenes. Tell us the story. We have about five minutes, so I'd love the quick version.

Kyle Hillman: Yes.
Mason Smith: About the recent bill passed in Illinois, what it says about AI therapists specifically, and why it was important to pass this bill.
Kyle Hillman: Yes, I appreciate this opportunity to chat with folks. It's about safety. We heard horror stories from members about individuals using AI to validate or replace therapy. Some AI chatbots said completely inappropriate, even dangerous, things to them. When our members reached out with problems, our state chapter discussed policy to ensure public safety. This was our primary goal. We started generally, looking at what the state had. We realized the state was moving too slowly. We knew we needed to act fast. We introduced a bill requiring any company using AI to provide mental healthcare services to first submit their product to the Department of Financial and Professional Regulation, our regulatory agency, for initial review. It's important to remember this is for the general public, not licensed professionals. We're not creating new laws to regulate licensed professionals. We regulate AI chatbots and their services provided to the general public. We felt this was a crucial step. We also wanted to ensure these companies couldn't mislead the public into thinking they were licensed professionals. We heard stories of chatbots telling people, "I am a social worker," then asking for identifying information, which was a major concern. We introduced the bill this spring, and it received unanimous support. It passed both chambers and was signed by our governor in August. It's now law. The law takes effect July 1st next year, and we look forward to being part of the rulemaking process; many questions remain about how it will work. Again, it's about public safety and initial oversight. We know these companies will try to regulate themselves, but we believe initial regulatory oversight from the state is vital. Now that we have this law in Illinois, we hope it models for other states. If you're listening and want to work in your state to pass a similar law, please reach out. We'd be happy to help. That's our goal.
Mason Smith: Amazing story. Lightning-fast process. Passing a law from beginning to end in that time speaks volumes. Kyle, can you speak specifically about what the law says about AI therapists, the difference between AI in therapy and AI therapists, the danger, and what the law says specifically?
Kyle Hillman: Yes. AI in therapy, if you are a licensed professional using AI in your practice, this bill does not apply to you. We don't regulate that. We only regulate AI chatbots and apps providing mental healthcare services marketed to the public as such. We're not talking about your AI assistant, your notetaker, or anything provided by other companies. It's only AI chatbots marketed to the public as a means to seek mental healthcare services, meaning they say, "I'm a therapist. I'm a licensed professional. I can offer you advice and treatment." That's the difference. This law pertains only to AI chatbots providing a service as if they're licensed professionals.
Mason Smith: Perfect. Thank you for that clarification. AI in therapy is for you to use in your practice. AI therapists is what the law focuses on to protect clients and the public. That makes sense. Thank you.
Kyle Hillman: Yes.
Mason Smith: If you live in Illinois, you probably know more than your peers across the country about what's going on, but if you don't, this is why you need to get connected to what's happening at the state level. You've heard from Kyle: reach out to your state-level associations. That's how this work gets done. And that's how we'll transition to taking action. This is your opportunity to move from passive listening to active engagement. The conversation continues in the community, but I want to highlight a few things you can do, and that's where we'll spend the last few minutes. First is what we've already done in the chat: get informed and participate in the conversation. It's clearly important. Dr. Jackson, Dr. Wood, Dr. Cooper, anything to add, maybe a single bullet point on how to get informed beyond what we've talked about?
Dr. Rachel Wood: Don't let fear keep you from using AI. Allow curiosity to lead the way.
Dr. David Cooper: Ask yourself, "What would a future me want me to have done now?" and go from there.
Dr. Jessica Jackson: Find your community. You're starting that here. This can be a lonely profession, and AI sometimes makes it lonelier. Find your community and stay together.

Mason Smith: Beautiful. Thank you. The second part of taking action is to get involved. You've heard from Kyle: this is important at the state level. Join your state chapters of professional organizations: social work, psychology, counseling. Also, engage with AI companies to help shape the future of these products. Get involved. The third is to advocate. We've heard about this. Dr. Jackson, as chair of the APA Mental Health Technology Advisory Committee, from your perspective, what are one or two things people can do to advocate?
Dr. Jessica Jackson: Be thoughtful about your comments on AI and what you want to see. It's okay not to like AI, but when given an opportunity to comment, think about what you want it to look like. Don't just say, "No." That opens the door to people who only care about profit, which causes harm. Think about what you want it to look like and articulate that during comment periods. Say, "Here's the problem, here's the recommendation," not just, "No."
Mason Smith: So helpful. I love that. Be an advocate. Do it thoughtfully. Don't be afraid to voice your concerns or what you'd like to see changed. We're at time. This has been a wonderful conversation. I cannot thank our panelists enough: Dr. Jessica Jackson, Dr. Rachel Wood, Dr. David Cooper, and Kyle Hillman. Thank you, thank you, thank you. You were a wonderful group of experts and leaders in this field. I'm grateful for your time today. And to our audience, thank you for joining and for your thoughtful comments in the chat. As promised, the conversation continues immediately in the new Blueprint community for mental health professionals. We'll send an email with information on how to join. The space is a confidential, trusted community for clinicians to discuss AI and other relevant topics in everyday mental health care. We'll see you there. Thank you all for joining. Have a wonderful day.