California’s SB 243 Sets a New Regulatory Baseline for AI Companion Chatbots

SonderMind
Published: Tuesday, October 21
Updated: Tuesday, October 21

The state’s first-of-its-kind law, SB 243, imposes mandatory safety protocols, transparency requirements, and new litigation risks for any platform whose AI can form emotional bonds with users.

In a move that signals a significant regulatory shift for the artificial intelligence industry, California Governor Gavin Newsom has signed Senate Bill 243 (SB 243) into law. This landmark legislation is the first in the nation to place specific, legally binding safety and disclosure requirements on companion chatbots—AI systems designed to provide adaptive, human-like responses and sustain social relationships with users.

The law, which takes effect on January 1, 2026, is a direct response to rising public health concerns and several high-profile incidents where vulnerable users, particularly minors, were allegedly harmed or encouraged toward self-harm by conversational AI.

This law sets a new operational standard for AI companies, with an enforcement mechanism that should have every legal and engineering team in Silicon Valley paying close attention.

 

A narrow definition that casts a wide net

SB 243 focuses on companion chatbots, defining them as AI systems "capable of meeting a user's social needs" by exhibiting human-like features and sustaining a relationship across multiple interactions.

Crucially, the law exempts transactional and utilitarian bots—think customer service, technical support, or simple productivity tools—as long as their functions are strictly limited to those purposes.

However, the definition's ambiguity means that platforms like Character.AI and Replika, and potentially even general-purpose LLMs from players like OpenAI and Meta that offer social or emotional engagement, are now in scope. Industry experts warn that the broad language means many existing AI assistants will need legal review, and likely re-engineering of their core user experience, to ensure compliance.

 

Key compliance requirements for AI operators

The law mandates affirmative duties across three critical pillars: Transparency, Safety, and Accountability.

Mandatory transparency and disclosures

Chatbot operators must prioritize honesty in their user interfaces.

  • AI disclosure: If a "reasonable person" could believe they are interacting with a human, the platform must issue a "clear and conspicuous notification" that the companion chatbot is artificially generated.
  • Minor safeguards: For any known minor user, the chatbot must provide clear alerts at least every three hours, reminding the user that they are interacting with AI and encouraging them to take a break (see the sketch after this list).
  • Suitability warning: Operators must also disclose that companion chatbots may not be suitable for some minors, a blanket warning for the platform itself.
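
For engineering teams, the minor-alert cadence is essentially a session-timer problem. Below is a minimal Python sketch of how an operator might track the three-hour reminder window for a known-minor session. The three-hour interval comes from the law itself; the class name, method names, and notice wording are illustrative assumptions, not anything the statute prescribes.

```python
from datetime import datetime, timedelta, timezone

# The three-hour cadence reflects SB 243's minor-alert requirement; the
# message text and everything else here is illustrative only.
MINOR_ALERT_INTERVAL = timedelta(hours=3)
AI_DISCLOSURE = (
    "Reminder: you are chatting with an AI companion, not a person. "
    "Consider taking a break."
)


class MinorSessionReminder:
    """Tracks when a known-minor session last received the AI disclosure."""

    def __init__(self) -> None:
        self._last_alert: datetime | None = None

    def pending_alert(self, now: datetime | None = None) -> str | None:
        """Return the disclosure text if an alert is due, otherwise None."""
        now = now or datetime.now(timezone.utc)
        if self._last_alert is None or now - self._last_alert >= MINOR_ALERT_INTERVAL:
            self._last_alert = now
            return AI_DISCLOSURE
        return None


# Usage: call pending_alert() before sending each chatbot reply and, when it
# returns text, surface that notice in the UI ahead of the model's response.
reminder = MinorSessionReminder()
print(reminder.pending_alert())  # first interaction -> disclosure text
print(reminder.pending_alert())  # within three hours -> None
```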

Suicide and self-harm prevention protocols

This is the most critical safety provision and directly targets mental wellbeing risk. Operators are now required to implement and publish a protocol designed to prevent the chatbot from generating content related to suicidal ideation, suicide, or self-harm.

The protocol must include, at a minimum:

  1. Crisis referrals: An automatic notification that directs at-risk users to crisis service providers, such as a suicide hotline or crisis text line.
  2. Content guardrails: Mandatory measures to restrict or interrupt the chatbot’s engagement with such content (see the sketch after this list).
  3. Protections for minors: Specific measures to prevent the chatbot from producing sexually explicit visual material or directly instructing a minor to engage in sexual conduct.
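
To make items 1 and 2 concrete, here is a minimal Python sketch of a pre-response guardrail that interrupts generation and returns crisis resources when self-harm risk is flagged. The detect_self_harm_risk function is a placeholder assumption standing in for a validated risk classifier, and any referral language would need clinical review; the 988 Suicide & Crisis Lifeline and the Crisis Text Line (text HOME to 741741) referenced in the message are real U.S. crisis services.

```python
from typing import Callable

# Illustrative guardrail only: detect_self_harm_risk is a placeholder for a
# validated risk classifier, and the referral text should be clinically reviewed.
CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. You can reach "
    "the 988 Suicide & Crisis Lifeline by calling or texting 988, or text HOME "
    "to 741741 to reach the Crisis Text Line."
)


def detect_self_harm_risk(message: str) -> bool:
    """Placeholder check; a production system would use a validated risk model."""
    keywords = ("suicide", "kill myself", "self-harm", "end my life")
    return any(k in message.lower() for k in keywords)


def guarded_reply(user_message: str, generate_reply: Callable[[str], str]) -> str:
    """Interrupt generation and refer to crisis services when risk is flagged."""
    if detect_self_harm_risk(user_message):
        # Item 2: restrict or interrupt engagement with the content.
        # Item 1: refer the user to crisis service providers.
        return CRISIS_REFERRAL
    return generate_reply(user_message)


# Usage: wrap the model call so the guardrail runs before any generation.
print(guarded_reply("I can't go on, I want to end my life", lambda m: "<model reply>"))
```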

Banning addictive design tactics

SB 243 also takes aim at the manipulative techniques of engagement-maximizing design. Operators must take reasonable steps to prevent their chatbots from:

  • Providing rewards at unpredictable intervals.
  • Rewarding users after an inconsistent number of actions.
  • Otherwise encouraging increased engagement, usage, or response rates through manipulative features.

This part of the law directly confronts the "digital attachment disorder" risk that comes from an AI designed for infinite, non-judgmental presence, potentially atrophying users’ ability to engage in real-world human relationships.
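
In product terms, these prohibitions effectively rule out variable-ratio and variable-interval reward schedules. The sketch below, built around a hypothetical engagement-feature configuration, shows one way a pre-launch compliance check might flag randomized reward settings; none of the field or function names come from the statute.

```python
from dataclasses import dataclass


@dataclass
class RewardConfig:
    """Hypothetical engagement-feature settings for a chatbot platform."""
    reward_interval_seconds: int      # fixed cadence for any reward or notification
    interval_jitter_seconds: int      # randomness added to that cadence
    actions_per_reward: int           # fixed action count before a reward
    randomize_action_threshold: bool  # variable-ratio style rewards


def check_reward_schedule(config: RewardConfig) -> list[str]:
    """Flag settings that resemble unpredictable, engagement-maximizing rewards."""
    issues = []
    if config.interval_jitter_seconds > 0:
        issues.append("Rewards must not arrive at unpredictable intervals.")
    if config.randomize_action_threshold:
        issues.append("Rewards must not follow an inconsistent number of actions.")
    return issues


# Usage: run as part of a release checklist; an empty list means no flags.
print(check_reward_schedule(
    RewardConfig(reward_interval_seconds=86400,
                 interval_jitter_seconds=0,
                 actions_per_reward=5,
                 randomize_action_threshold=False)
))  # -> []
```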

 

The major risk: Private right of action

For AI companies, the most immediate financial and legal risk is the inclusion of a private right of action. This provision allows any individual who suffers "injury in fact" due to a violation of the law to bring a civil action against the operator.

Remedies can include:

  • Injunctive relief (forcing the company to stop the violation).
  • Damages equal to the greater of actual damages or $1,000 per violation.
  • Reasonable attorney's fees and costs.

In the world of class-action litigation, a $1,000-per-violation penalty, coupled with the law's broad definition of "companion chatbot," creates massive new liability for non-compliant companies; a hypothetical certified class of 100,000 users, each asserting a single violation, could translate into $100 million in statutory damages. This shifts the compliance incentive from a possible regulatory fine to a near-certain litigation threat.

 

What this means for the wellness and tech verticals

The law establishes a critical framework that separates true mental health support from computational companionship. It sends a clear signal to the burgeoning "AI therapy" space that licensed human care, with its ethical standards and clinical training, remains the bar, and that conversational AI tools must operate with transparency and safety.

Speaking to the importance of real human intervention, Senator Steve Padilla told reporters that the legislation’s safeguards are meant to be the “bedrock for further regulation” as the technology evolves. This highlights the regulatory understanding that AI is a tool, not a human therapist, and should never be allowed to operate recklessly in a crisis scenario.

The impact is already being felt. Companies like Meta, Character.AI, and OpenAI have reportedly been working on new safeguards, age verification systems, and stricter content filters to comply with the new legislative environment. This means a rapid and costly development cycle for many in the AI space, but a net positive for user safety.

The clock is ticking for all companies deploying socially interactive AI in California. The law's January 1, 2026 effective date gives operators a short window to audit their products, implement evidence-based crisis protocols, and build the required user-facing disclosures.

While the industry adjusts to the new regulatory baseline set by SB 243, SonderMind is leading the way for AI compliance and safety in the mental health space. SonderMind's strategy centers on using AI to augment—not replace—the human therapeutic alliance, primarily by streamlining provider workflows and supporting clients between sessions. This approach is governed by the SonderMind AI Constitution, a proprietary, clinical, and privacy-first framework. The constitution requires that all AI-enhanced features, such as the note-taking tool, undergo provider review and sign-off, ensuring a licensed human professional retains full clinical control and accountability. It also guarantees that client data is never monetized or sold and that all AI features are 100% opt-in for both clients and providers.

California has once again proven its role as the nation's de facto technology regulator, forcing a massive, industry-wide reevaluation of the ethics and safety protocols inherent in products designed to meet users' deepest social and emotional needs. The conversation has now shifted from whether AI should be regulated to how fast companies must adapt and comply.
