Using ChatGPT in Your Practice? Why Your Data Is Less Private Than You Might Think

AI in Therapy • Sep 8, 2025

In Brief

In July 2025, it happened quietly, the way most privacy breaches do. No splashy headlines at first, just a trickle of reports from people who had stumbled across something they shouldn’t have: private ChatGPT conversations, containing everything from personal letters to therapy-style prompts, were showing up in Google search results.

The data wasn’t hacked or stolen. It was simply there: indexed and discoverable by anyone who knew what to type. Among the exposed chats were detailed descriptions of health symptoms as well as personal accounts of trauma and relationships. People had shared intimate details of their lives in what they thought was a private exchange, not knowing that hours later strangers could read them in a search result.

OpenAI, the company behind ChatGPT, eventually disabled the feature and blocked indexing. But what had already been crawled by search engines could linger in cached versions or screenshots. This incident highlights a deeper truth: once data slips into the open, there’s no guarantee it can be pulled back.

Why “De-Identified” Doesn’t Always Mean Safe

When clinicians paste parts of session notes into ChatGPT, a common explanation is, “I took out names and PHI.”

The trouble is, data doesn’t have to be overtly identifiable to be traceable. A combination of details (the timing of an event, the nature of a diagnosis, the description of a location) can be enough to reassemble the whole picture. Security researchers call this process “re-identification.”

The market for health data is another layer of risk. On the dark web, a single medical record can sell for around $60, far more than a stolen Social Security number or credit card. This is in part because health records include myriad data points: addresses, insurance details, family connections, and often a deep personal narrative. This is the kind of information that can be exploited for years.

When that data leaves a HIPAA-compliant environment, it’s not just a privacy concern. It’s a professional liability.

The Limits of General-Purpose AI for Clinical Work

The indexing incident is only one example of why general-purpose AI tools like ChatGPT are a poor fit for therapeutic care and clinical workflows.

On ChatGPT’s free, Plus, and Team plans, there’s no Business Associate Agreement (BAA), the legal contract that makes a vendor accountable under HIPAA. Without it, there’s no formal guarantee your data will be stored or handled in compliance with the law. The only way to get a BAA is through OpenAI’s Enterprise tier, which is designed for organizations buying at least 100 seats, far beyond the needs (and budgets) of most private practices.

Unless you’ve opted out, anything you type may also be used to train future models. Even with the opt-out enabled, certain kinds of feedback can still be retained. Once your words enter that system, you lose control over where they might surface.

Beyond privacy, ChatGPT was never built to handle the nuances of therapy. It doesn’t remember treatment goals, revisit themes, or connect today’s session to your client’s last visit. Each interaction stands alone without a clear connection to the work you’ve done with your client.

The Professional Ethics of AI Technology in Therapy

Positive treatment outcomes in therapy will always rest on the trust and relationship you’ve developed with your clients. That rapport is why clients tell you things they may never share with anyone else. It’s what makes therapy therapeutic.

The American Counseling Association’s Code of Ethics is clear on how AI should be used. Clinicians should understand how the tools they use process data, be able to explain that clearly to clients, and give clients the option to opt out of AI in their care. That means knowing exactly where your data goes, how long it stays there, and who might access it.

With a general-purpose AI tool like ChatGPT, those answers aren’t always straightforward and can change overnight as features are added or policies are updated.

What Strong Safeguards Look Like in Practice

Experts in health data privacy say the safest systems share a few common traits: 

  • They provide a signed BAA to any covered entity that requests one, without requiring a large-scale or enterprise contract. 
  • They make it explicit when and how data is stored, and they give users control over how long it’s kept. 
  • They make deletion simple, immediate if needed, and verifiable.

In systems designed for clinical work, encryption is standard. Audit trails record who accessed information and when. And critically, personal health information is never used for model training unless the clinician has given informed consent for that specific purpose. For AI tools built specifically for therapists, best practices for privacy and security include HIPAA compliance, enterprise-grade encryption, and yearly external audits.

Those features aren’t just nice to have. They’re baseline requirements for keeping protected health information out of the wrong hands.

Questions Worth Asking Before Using AI in a Clinical Setting

Before any clinician adopts an AI tool – whether it’s a general-purpose chatbot or a purpose-built clinical assistant – privacy specialists recommend asking:

  • Is there a signed BAA available to me?
  • Do I know exactly where my data is stored and who has access to it?
  • Can I delete data completely, and how is that deletion confirmed?
  • Am I able to explain all of this to a client in plain language?

The answers won’t be the same for every tool. But the absence of a clear, confident “yes” to these questions is a signal that further scrutiny is needed.

An Ongoing Risk

The exposure of ChatGPT conversations in search results was resolved quickly. Yet, it underscored a tension that isn’t going away: AI can be powerful, helpful, and time-saving, but it can also be susceptible to misuse in ways that are sometimes unknown until it’s too late.

For mental health professionals, the choice to integrate AI into their work usually starts as a way to lighten their cognitive and administrative load. But that decision also directly intersects with professional ethics, laws, and the trust their clients place in them.

Those stakes mean every new tool must be weighed not only for its capabilities, but also for its ability to safeguard the stories and data it’s entrusted with. In a field where confidentiality is everything, that measure will always matter as much as – if not more than – what the tool or platform is capable of.
