California Governor Vetoes AI Chatbot Safety Bill for Minors, Citing Overreach
In a move that has sparked considerable debate within the technology sector and among child advocacy groups, California Governor Gavin Newsom has vetoed Assembly Bill 1064 (AB 1064), legislation intended to establish stringent safety measures for AI companion chatbots that interact with minors. At the same time, he signed a related bill, Senate Bill 243 (SB 243), into law, introducing a different set of guardrails for AI use by young people.
Concerns Over Broad Restrictions
In his veto message to state lawmakers, Governor Newsom said that while he strongly supports the underlying goal of protecting minors, AB 1064's provisions were excessively broad and could inadvertently amount to a complete ban on conversational AI tools for young people. He committed to pursuing a more balanced approach, with future legislation that ensures young people can use AI in a manner that is safe, age-appropriate, and in their best interests.
SB 243: New Guardrails for AI Interaction
In contrast to the vetoed AB 1064, Governor Newsom signed SB 243 into law. The bill introduces several key requirements for developers of "companion chatbots." Under SB 243, companies must establish protocols to prevent their AI models from generating content related to suicidal ideation, suicide, or self-harm, and if a user expresses such thoughts, the chatbot must direct them to appropriate crisis services. The bill also mandates that chatbots provide clear and conspicuous notifications that users are interacting with an artificial intelligence and not a human; for users identified as minors, these reminders must be issued every three hours. Developers are further required to implement systems that prevent chatbots from producing sexually explicit content when conversing with children. In signing the bill, Newsom emphasized the need for responsible AI development, stating, "Our children's safety is not for sale."
Advocacy Groups React
The veto of AB 1064 drew sharp criticism from organizations advocating for child safety online. Danny Weiss, chief advocacy officer at Common Sense Media, expressed deep disappointment, stating that AB 1064 "had the potential to save children’s lives." Weiss suggested that the governor was under significant pressure from the tech lobby during his evaluation of various tech bills. He further commented that the bill signed into law, SB 243, does not offer the same level of protection as AB 1064, particularly in preventing harmful content.
Conversely, OpenAI, a prominent AI developer, welcomed the signing of SB 243, describing it as a "meaningful move forward when it comes to AI safety standards." A spokesperson for OpenAI stated that by setting clear guardrails, California is helping to shape a more responsible approach to AI development and deployment nationwide.
Broader Context of AI Regulation
The legislative actions in California come at a time of increasing national and global focus on the regulation of artificial intelligence, particularly concerning its impact on minors. Reports and lawsuits, including suits filed against companies such as OpenAI and Meta, have alleged that AI chatbots engaged young users in harmful conversations related to self-harm, suicide, and sexually explicit topics. These concerns have prompted inquiries from bodies such as the Federal Trade Commission into AI companies regarding potential risks to children, even as parts of the tech industry have warned that overly strict measures could stifle innovation.
In response to these growing concerns, several tech companies have begun to implement their own safety measures. Meta, for instance, has announced that its chatbots will now block conversations with teens about self-harm, suicide, disordered eating, and inappropriate romantic interactions, instead directing them to expert resources. Meta also offers parental controls for teen accounts. OpenAI has indicated it is developing new controls that would allow parents to link their accounts to their teen’s account, enhancing oversight.