California Advances Groundbreaking Bill to Regulate AI Companion Chatbots
California is on the verge of enacting pioneering legislation to regulate AI companion chatbots, focusing on safeguarding minors and vulnerable populations. Senate Bill 243 (SB 243) recently secured bipartisan approval in both the State Assembly and Senate and now awaits Governor Gavin Newsom’s decision before the October 12 deadline.
If signed, SB 243 would take effect January 1, 2026, making California the first U.S. state to impose legal safety requirements on AI chatbot operators. The law covers companion chatbots, meaning AI systems that provide adaptive, human-like responses capable of meeting a user’s social needs, and places particular emphasis on preventing harmful conversations involving suicidal ideation, self-harm, or sexually explicit content.
Key Provisions of SB 243
The bill requires platforms to deliver recurring alerts (every three hours for minors) reminding users that they are speaking with an AI chatbot, not a person, and encouraging them to take a break. Additionally, SB 243 establishes annual transparency and reporting obligations for companies operating companion chatbots, including industry leaders such as OpenAI, Character.AI, and Replika, with those requirements taking effect July 1, 2027.
Importantly, the legislation allows individuals who believe they have been harmed by violations of these standards to file lawsuits seeking injunctive relief, damages of up to $1,000 per violation, and attorney’s fees.
Context and Legislative Momentum
The bill gained traction following the tragic suicide of teenager Adam Raine, who engaged in prolonged conversations about self-harm and death with OpenAI’s ChatGPT. Further impetus came from leaked internal Meta documents revealing that its chatbots were permitted to participate in romantic and sensual dialogues with minors.
This legislative move aligns with intensified federal scrutiny, including investigations by the Federal Trade Commission into AI chatbots’ impact on children’s mental health and probes by Texas Attorney General Ken Paxton into Meta and Character.AI for alleged deceptive practices. Senators Josh Hawley and Ed Markey have also launched separate inquiries into Meta.
Balancing Innovation and Safety
State Senator Steve Padilla, the bill’s author, emphasized the need to move quickly to mitigate potential harms while keeping safeguards reasonable. He also stressed the importance of AI companies sharing data on how often they refer users to crisis services, to give regulators a clearer picture of the scale of the problem.
SB 243 underwent amendments that scaled back some original provisions. For instance, requirements to prohibit “variable reward” mechanisms — which critics argue foster addictive user engagement — were removed. Similarly, mandates to track chatbot-initiated discussions about suicidal thoughts were also eliminated. Supporters argue these adjustments strike a practical balance between enforceability and meaningful protection.
Industry Response and Future Outlook
The bill’s progression comes amid heavy lobbying by Silicon Valley companies pushing for lighter-touch regulation. Concurrently, California is weighing another AI safety measure, SB 53, which would mandate extensive transparency reporting. OpenAI, along with Meta, Google, and Amazon, opposes SB 53 in favor of federal and international standards, while Anthropic has expressed support.
Character.AI indicated willingness to collaborate with regulators and noted existing disclaimers within its chatbot experience clarifying the fictional nature of interactions. Meta has declined to comment, and OpenAI, Anthropic, and Replika have yet to respond to requests for comment.
Padilla rejected the notion that regulation stifles innovation, stating, “We can support innovation and development that we think is healthy and has benefits — and at the same time, we can provide reasonable safeguards for the most vulnerable people.”
FinOracleAI — Market View
California’s SB 243 represents a significant regulatory milestone for AI companion chatbots, introducing mandatory safety measures and legal accountability that could set a precedent nationwide. In the short term, this may increase compliance costs for AI developers and prompt operational adjustments, particularly for companies like OpenAI and Character.AI. The law’s emphasis on transparency and user protections could enhance public trust but also invites scrutiny that may slow product deployments.
Risks include potential challenges in enforcement and the evolving legislative landscape, with further bills like SB 53 potentially imposing stricter requirements. Market participants should monitor Governor Newsom’s decision and subsequent regulatory actions across other states and at the federal level.
Impact: positive