California Enacts First US Rules for AI 'Companion' Chatbots

Summary

California has enacted the first state law imposing safeguards on "companion" chatbots. The law requires chatbots to identify themselves as artificial, filter sexual and self-harm content when interacting with minors, and report signs of suicidal ideation to the Office of Suicide Prevention. It also mandates regular reminders that users are engaging with AI and sets protocols for age-appropriate content.

Earlier, stronger provisions, such as mandatory third-party audits and broader protections for all users, were dropped after industry lobbying, prompting several advocacy groups to withdraw support and criticize the final bill as too weak. Governor Newsom defended the law as essential for child protection amid rapid AI development.

The law, part of a broader legislative package on AI, cements California's leadership in AI regulation. However, critics and developers warn of practical challenges, including ambiguous enforcement, potential chilling effects on beneficial conversations, and the difficulty of verifying user ages, raising questions about the law's real-world impact and feasibility.