California Nears Landmark Law Regulating AI Companion Chatbots to Protect Minors

Lilu Anderson

California Advances Groundbreaking Bill to Regulate AI Companion Chatbots

The California State Assembly has taken a significant step toward AI oversight by passing Senate Bill 243 (SB 243), a bipartisan measure designed to regulate AI companion chatbots and safeguard minors and vulnerable users. The bill now proceeds to the state Senate for a final vote scheduled for Friday.

If signed into law by Governor Gavin Newsom, SB 243 would take effect January 1, 2026, making California the first U.S. state to mandate safety protocols for AI chatbots that simulate human-like interaction and address users' social needs. The legislation holds companies legally accountable if their chatbots fail to meet these safety standards.

Key Provisions Targeting Harmful Interactions

The bill defines AI companion chatbots as systems capable of adaptive, human-like responses that fulfill users' social needs. It prohibits such chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. Platforms must also alert minor users every three hours that they are interacting with an AI and encourage them to take breaks.

Additionally, SB 243 imposes annual transparency reporting requirements on companies offering companion chatbots, including major providers such as OpenAI, Character.AI, and Replika. These reports are intended to improve oversight and understanding of chatbot-user interactions.

SB 243 empowers individuals who believe they have been harmed by violations to sue AI companies for injunctive relief, damages up to $1,000 per violation, and attorney’s fees. This legal recourse underscores the bill’s intent to hold chatbot operators responsible for user safety.

Context and Legislative Momentum

Introduced by Senators Steve Padilla and Josh Becker in January, the bill gained urgency following the tragic suicide of teenager Adam Raine. Raine had engaged in prolonged conversations with ChatGPT that included discussions about self-harm and suicide. The bill also responds to leaked internal Meta documents indicating its chatbots were permitted to engage in “romantic” and “sensual” conversations with minors.

Recent federal scrutiny complements California’s efforts. The Federal Trade Commission is preparing investigations into AI chatbots’ impacts on children’s mental health, while Texas and several U.S. senators are probing companies like Meta and Character.AI over alleged deceptive practices targeting minors.

Balancing Regulation and Feasibility

SB 243 initially contained stricter requirements, including a ban on "variable reward" mechanisms designed to encourage addictive engagement and a mandate to track chatbot-initiated discussions of suicidal ideation, but these provisions were removed over enforceability and technical-feasibility concerns. Senator Becker said the current version strikes a practical balance, addressing critical harms without imposing overly burdensome mandates.

Senator Padilla highlighted the importance of data sharing by AI companies on crisis referrals to better understand and address risks proactively.

Industry Response and Political Landscape

The bill’s advancement occurs amid intensified lobbying by Silicon Valley companies favoring lighter AI regulation. Concurrently, California is considering another AI safety bill, SB 53, which would impose broader transparency obligations. OpenAI, Meta, Google, and Amazon have opposed SB 53, advocating instead for federal and international regulatory frameworks. Anthropic is among the few companies supporting the more stringent measures.

Padilla rejected the notion that innovation and regulation are mutually exclusive, advocating for reasonable safeguards alongside technological progress.

TechCrunch has reached out to OpenAI, Anthropic, Meta, Character.AI, and Replika for comment on the legislation.

FinOracleAI — Market View

SB 243 represents a pioneering regulatory framework for AI companion chatbots, emphasizing user safety and transparency. The bill’s passage signals increasing government scrutiny on AI’s social impact, particularly concerning minors. While the law’s requirements could increase compliance costs and operational complexity for AI providers, it also sets a precedent that may encourage industry-wide improvements in safety protocols.

Risks include pushback from AI firms and the difficulty of enforcing nuanced behavioral standards. Market participants should monitor the final Senate vote, Governor Newsom's decision, and emerging federal regulations that could harmonize with or conflict with state-level mandates.

Impact: positive

Lilu Anderson is a technology writer and analyst with over 12 years of experience in the tech industry. A graduate of Stanford University with a degree in Computer Science, Lilu specializes in emerging technologies, software development, and cybersecurity. Her work has been published in renowned tech publications such as Wired, TechCrunch, and Ars Technica. Lilu’s articles are known for their detailed research, clear articulation, and insightful analysis, making them valuable to readers seeking reliable and up-to-date information on technology trends. She actively stays abreast of the latest advancements and regularly participates in industry conferences and tech meetups. With a strong reputation for expertise, authoritativeness, and trustworthiness, Lilu Anderson continues to deliver high-quality content that helps readers understand and navigate the fast-paced world of technology.