The state’s first-of-its-kind law, SB 243, imposes mandatory safety protocols and transparency requirements, and creates new litigation risk, for any platform whose AI can form emotional bonds with users.
In a move that signals a significant regulatory shift for the artificial intelligence industry, California Governor Gavin Newsom has signed Senate Bill 243 (SB 243) into law. This landmark legislation is the first in the nation to place specific, legally binding safety and disclosure requirements on companion chatbots—AI systems designed to provide adaptive, human-like responses and sustain social relationships with users.
The law, which takes effect on January 1, 2026, is a direct response to rising public health concerns and several high-profile incidents where vulnerable users, particularly minors, were allegedly harmed or encouraged toward self-harm by conversational AI.
This law sets a new operational standard for AI companies, with an enforcement mechanism that should have every legal and engineering team in Silicon Valley paying close attention.
SB 243 focuses on companion chatbots, defining them as AI systems "capable of meeting a user's social needs" by exhibiting human-like features and sustaining a relationship across multiple interactions.
Crucially, the law exempts transactional and utilitarian bots—think customer service, technical support, or simple productivity tools—as long as their functions are strictly limited to those purposes.
However, the definition's ambiguity means that platforms like Character.AI and Replika, and potentially even general-purpose LLMs from players like OpenAI and Meta that offer social or emotional engagement, are now in scope. Industry experts warn that the broad language means many existing AI assistants will need a legal review, and likely a re-engineering of their core user experience, to ensure compliance.
The law mandates affirmative duties across three critical pillars: Transparency, Safety, and Accountability.
Chatbot operators must be honest about what users are talking to. If a reasonable person could be misled into believing they are chatting with a human, the operator must provide a clear and conspicuous notice that the companion chatbot is artificially generated and not human. For users the operator knows to be minors, the chatbot must also remind them at least every three hours to take a break and that it is not human.
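For product teams, the minor-facing requirement amounts to a cadence check in the chat loop. The sketch below is a minimal illustration under assumptions, not a compliance implementation: the DisclosureScheduler class, the three-hour default, and the notice wording are hypothetical stand-ins for whatever disclosure an operator's counsel actually approves.

```python
from datetime import datetime, timedelta

# Hypothetical wording; the statute requires a clear notice, not this exact text.
AI_DISCLOSURE = (
    "Reminder: you are chatting with an AI companion, not a human. "
    "Consider taking a break."
)

class DisclosureScheduler:
    """Tracks when a minor user last saw the AI disclosure and break reminder."""

    def __init__(self, interval_hours: float = 3.0):
        self.interval = timedelta(hours=interval_hours)
        self.last_shown: datetime | None = None

    def reminder_due(self, now: datetime) -> bool:
        # Due at session start and again once `interval` has elapsed.
        return self.last_shown is None or now - self.last_shown >= self.interval

    def attach_reminder(self, reply: str, now: datetime) -> str:
        # Prepend the disclosure to the outgoing reply whenever a reminder is due.
        if self.reminder_due(now):
            self.last_shown = now
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply
```

In practice, a chat service would call attach_reminder on every outgoing message in a session it knows belongs to a minor, so the notice resurfaces on the statutory cadence rather than only at sign-up.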
This is the law's most consequential safety provision, and it directly targets mental health risk. Operators are now required to implement and publish a protocol designed to prevent the chatbot from generating content related to suicidal ideation, suicide, or self-harm.
The protocol must include, at a minimum:
- A notification that refers users who express suicidal ideation, suicide, or self-harm to crisis service providers, such as a suicide hotline or crisis text line
- Publication of the protocol's details on the operator's website
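In engineering terms, that protocol usually becomes a guardrail that screens each message before the companion model answers. The sketch below is purely illustrative and is not clinical guidance: risk_model, companion_model, and the 0.5 threshold are hypothetical placeholders, and no single score or keyword check would by itself constitute an adequate protocol.

```python
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, you are not alone. "
    "In the US, you can call or text 988 to reach the Suicide & Crisis Lifeline."
)

def generate_reply(user_message: str, risk_model, companion_model,
                   risk_threshold: float = 0.5) -> str:
    """Return the chatbot's reply, substituting a crisis referral whenever the
    self-harm risk check flags the incoming message."""
    # `risk_model.score` stands in for a hypothetical classifier returning a
    # 0-1 estimate that the message involves suicidal ideation or self-harm.
    risk = risk_model.score(user_message)
    if risk >= risk_threshold:
        # Per the operator's published protocol: do not let the companion model
        # engage with self-harm themes; refer the user to crisis services instead.
        return CRISIS_REFERRAL
    return companion_model.respond(user_message)
```

The design point is that the referral path bypasses the companion model entirely, which makes the behavior auditable and easy to describe in the published protocol.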
Banning addictive design tactics
SB 243 also takes aim at the manipulative techniques of engagement-maximizing design, requiring operators to take reasonable steps to keep their chatbots from relying on them. This provision directly confronts the “digital attachment disorder” risk posed by an AI designed for infinite, non-judgmental availability, which can atrophy users’ ability to engage in real-world human relationships.
For AI companies, the most immediate financial and legal risk is the inclusion of a private right of action. This provision allows any individual who suffers "injury in fact" due to a violation of the law to bring a civil action against the operator.
Remedies can include injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and reasonable attorney's fees and costs.
In the world of class-action litigation, a $1,000-per-violation floor, coupled with the broad definition of "companion chatbot," creates massive new liability for non-compliant companies: a single violation affecting each member of a 100,000-user class would mean $100 million in statutory damages before attorney's fees. This shifts the compliance incentive from a possible regulatory fine to a near-certain litigation threat.
The law establishes a critical framework that separates true mental health support from computational companionship. It is a clear signal to the burgeoning "AI therapy" space that licensed human care, with its ethical standards and clinical training, remains the bar, and that conversational AI tools must operate transparently and safely.
Speaking to the importance of real human intervention, Senator Steve Padilla told reporters that the legislation’s safeguards are meant to be the “bedrock for further regulation” as the technology evolves. This highlights the regulatory understanding that AI is a tool, not a human therapist, and should never be allowed to operate recklessly in a crisis scenario.
The impact is already being felt. Companies like Meta, Character.AI, and OpenAI have reportedly been working on new safeguards, age verification systems, and stricter content filters to comply with the new legislative environment. This means a rapid and costly development cycle for many in the AI space, but a net positive for user safety.
The clock is ticking for all companies deploying socially interactive AI in California. The law's January 1, 2026 effective date gives operators a short window to audit their products, implement evidence-based crisis protocols, and build the required user-facing disclosures.
While the industry adjusts to the new regulatory baseline set by SB 243, SonderMind is leading the way on AI compliance and safety in the mental health space. SonderMind's strategy centers on using AI to augment, not replace, the human therapeutic alliance, primarily by streamlining provider workflows and supporting clients between sessions. This approach is governed by the SonderMind AI Constitution, a proprietary, clinical, and privacy-first framework. The constitution requires that all AI-enhanced features, such as the note-taking tool, undergo provider review and sign-off, ensuring a licensed human professional retains full clinical control and accountability. It also guarantees that client data is never monetized or sold and that all AI features are 100% opt-in for both clients and providers.
California has once again proven its role as the nation's de facto technology regulator, forcing a massive, industry-wide reevaluation of the ethics and safety protocols inherent in products designed to meet users' deepest social and emotional needs. The conversation has now shifted from whether AI should be regulated to how fast companies must adapt and comply.
No matter where you live, SonderMind makes it easy to find quality therapy and psychiatry providers that meet your criteria.