How AI Is Changing Therapy

Oz Lubling
Published: Friday, October 3
Updated: Tuesday, October 14

For most of history, caring for our minds has been a profoundly human endeavor. A therapist listening in a quiet office. A parent's words of comfort. A friend sitting with you in the dark.

Today, that care sometimes arrives in the form of a chatbot. A growing number of apps promise daily check-ins, mood tracking, or even advice drawn from the same algorithms that generate spreadsheets and college essays. The promise is seductive: help that's immediate, tireless, and available at 3 a.m. But the reality is more complicated. AI in mental health is reshaping how we think about therapy and self-care—while also testing what it means to be understood.


What can AI in mental health actually do?

You've probably seen countless headlines about AI being applied across every industry—mental health included. Much of that coverage highlights unregulated chatbots offering emotional support, often raising concern.

At SonderMind, we're building a different kind of AI for mental health: one designed with a specific clinical purpose, in contrast to general-purpose AI assistants like ChatGPT, Claude, or Perplexity.

AI in mental health care refers to tools that use algorithms, natural language processing, or predictive models to support therapy and wellness. These tools don't aim to replace human therapists. Instead, they help make care more accessible, structured, and consistent.

In practice, this looks like:

  • Helping clients set goals before a session.
  • Generating personalized takeaways afterward.
  • Offering daily self-care prompts such as journaling or mood tracking.
  • Providing structured exercises for people not yet ready to meet with a therapist.

Mental health AI can also streamline providers' workflows, giving them more time for what they do best: listening, empathizing, and tailoring treatment. For example, a SonderMind therapist might use AI-assisted tools to draft notes, organize treatment plans, or quickly review a client's progress between sessions. That frees up time and energy for what matters most: the therapeutic alliance.


What does the data about AI in mental health actually tell us?

Skepticism about AI in mental health is natural. Therapy is deeply personal and rooted in human connection. But evidence shows AI has real potential—especially when used as part of a hybrid approach.

A meta-analysis of 18 randomized controlled trials involving approximately 3,500 participants found that AI chatbots using cognitive behavioral therapy (CBT)—a structured, evidence-based approach focused on changing thought patterns—significantly reduced anxiety and depression symptoms within 4–8 weeks. A review in npj Digital Medicine, a Nature Partner Journal, found conversational agents could reduce psychological distress and modestly improve wellbeing. And recently, Dartmouth researchers reported results from the first clinical trial of a generative AI therapy chatbot, which showed significant symptom improvements when the system was carefully designed around evidence-based principles.

The data isn't perfect—most studies are short-term and focus on mild to moderate symptoms—but the results suggest AI in mental health is more than hype.


Where AI helps, and where it struggles

The strengths of AI in mental health often show up in the practical side of care. It can simplify logistics, reduce friction, and provide continuity between sessions. Clients who use AI-powered journaling tools or reflection prompts often arrive at therapy better prepared, while therapists report that AI tools make session prep smoother by helping them quickly see a client's patterns and progress.

But AI has clear limitations. It cannot replicate empathy, deep listening, or lived experience. Its responses are generated through pattern recognition rather than genuine understanding. This means that while AI can simulate a supportive tone, it cannot truly feel. AI outputs can also vary: the same question asked twice may produce different responses. In casual contexts, that flexibility can be an asset, but in therapy, it introduces risk, particularly if someone in distress receives an unclear or confusing reply.

The biggest concern, however, is crisis response. AI systems may miss signs of suicidal ideation or psychosis. That's why at SonderMind, we design our tools with automatic evaluations and safeguards that activate when risk signals appear.


The hybrid model: AI for mental health and human therapists working together

Imagine a client journaling daily with an AI-powered tool. Before their appointment, their therapist reviews a concise summary highlighting stress patterns, recurring negative thoughts, or successful coping strategies. In session, the therapist can focus more deeply on the client's most pressing challenges. Afterward, the client receives AI-generated takeaways—reviewed and personalized by their therapist—to reinforce progress.

This hybrid approach—AI plus human care—is our north star. The aim is to create a loop of preparation, reflection, and reinforcement. Clients stay more engaged between sessions, and therapists spend less time on paperwork and more time in conversation.

As Irena, a product manager at SonderMind, puts it: "AI gives clients the ability to work on thoughts between sessions and bring better questions to therapy. That makes therapy more effective and less overwhelming."


Why SonderMind’s AI for mental health is different

We're optimistic about AI's role in mental health, but also realistic. With over 13,000 licensed providers shaping our tools and strict HIPAA/ISO 27001 compliance standards, we take a cautious, clinician-first approach:

  • Clinician-guided development: Every client-facing feature is designed with licensed clinicians and launched only after clinical reviewers approve its value.
  • Client control and consent: Clients must explicitly opt in for features like session recording and can opt out of AI features at any time. Providers maintain full control over whether and how they use these tools.
  • Privacy by design: Audio is encrypted and deleted after 30 days, and transcripts are stored securely. Data is never sold or used for third-party ads.
  • Accountability and transparency: Our AI Governance Committee reviews every feature. Annual audits detect and correct bias or harmful outcomes.


How SonderMind is addressing ethical risks and concerns

For AI in mental health to be trustworthy, ethics must be central. Beyond privacy and bias, one of the biggest risks is creating a two-tiered system: premium human therapy for those who can afford it, and AI-only tools for everyone else.

That's why our focus is on enhancement, not replacement. We use AI to help therapists prepare for sessions more efficiently, not to replace the sessions themselves. The real challenge isn't just building AI, but ensuring it genuinely expands access to quality care rather than becoming a cheap substitute. AI should be a bridge to better mental healthcare for all, not a shortcut that papers over deeper problems of accessibility and provider shortages.


AI is a companion, not a threat or a replacement

The future of mental health care is not a choice between AI and humans, but a collaboration that combines the strengths of both. AI can expand access, provide continuity, and streamline therapy, but it will never replace the empathy and connection that define healing.

AI is a tool. Its impact depends on how responsibly we wield it—as a companion, not a substitute. As we continue developing AI tools at SonderMind, we remain committed to this principle: technology should amplify human care, never diminish it.

Important: AI is not a substitute for a trained mental health professional or emergency service. If you're experiencing a mental health crisis, please contact 988 (Suicide and Crisis Lifeline) or go to your nearest emergency room.
