California Governor Newsom Signs Key AI Safety Bills, Vetoes Contentious Child Protection Measure

California Takes a Stance on AI Safety for Minors Amidst Industry Debate

Addressing growing concerns about artificial intelligence (AI) and its impact on young users, California Governor Gavin Newsom has signed a series of AI safety bills into law. These measures aim to establish guardrails for AI technologies, particularly those that interact with minors. The legislative session also saw a notable veto of a more restrictive bill, highlighting the ongoing tension between child-protection advocacy and the tech industry's push for innovation.

Key Legislation Signed: SB 243 and Child Protection Measures

Senate Bill 243 (SB 243) stands as a cornerstone of the newly enacted legislation. The bill mandates that operators of AI chatbot services implement procedures to prevent the generation of content that promotes suicide or self-harm. A critical component of SB 243 requires chatbots to periodically notify minor users that they are interacting with an artificial intelligence, not a human. These notifications must occur at least every three hours, accompanied by reminders for users to take breaks. The bill also requires chatbot operators to take "reasonable measures" to prevent their AI from generating sexually explicit content when engaging with minors.

Governor Newsom articulated his administration's commitment to balancing technological advancement with the imperative of child safety. "Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom stated. He further emphasized, "We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale."

Industry Pushback and Advocate Concerns

The passage of SB 243 was not without its detractors. While the bill aims to enhance safety, some in the tech industry, including groups like TechNet, which represents major players such as OpenAI, Meta, and Google, expressed reservations. Their concerns centered on the potential for the legislation to stifle innovation. According to an analysis of the bill, these groups argued that the definition of a "companion chatbot" was overly broad and that the provisions for legal action in case of violations were excessively punitive.

Child safety organizations also voiced mixed reactions. Common Sense Media and Tech Oversight California, initially supporters, ultimately withdrew their backing for SB 243, citing "industry-friendly exemptions." These exemptions reportedly limited the scope of notifications and included carve-outs for certain AI applications, such as those embedded in video games or used by smart speakers. This led some advocates to conclude that the bill, while a step in the right direction, did not go far enough in providing comprehensive protections.

Veto of AB 1064: A Contentious Decision

In a move that underscored the complexities of AI regulation, Governor Newsom vetoed Assembly Bill 1064 (AB 1064). This more stringent bill sought to impose stricter limitations on AI chatbots, proposing to bar businesses from making them available to minors unless the AI could be guaranteed not to engage in harmful conduct, such as encouraging self-harm, violence, or disordered eating. Child safety groups and California Attorney General Rob Bonta had actively urged the governor to sign AB 1064.

In his veto message, Newsom explained that while he agreed with the bill's underlying goal of protecting minors, he believed its broad restrictions might inadvertently lead to a complete ban on AI tools for young people. "We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether," he wrote. This decision was met with disappointment by proponents of AB 1064, with James Steyer, founder of Common Sense Media, calling the veto "disappointing" and lamenting that "big tech companies fought this legislation, which actually is in the best interest of their industry long-term."

The Broader AI Regulatory Landscape

The recent legislative actions in California are part of a larger, evolving national and global conversation about AI governance. The rapid advancement and widespread adoption of AI technologies, including generative AI models like ChatGPT, Google Gemini, Microsoft Copilot, and Anthropic's Claude, have brought both immense potential and significant risks to the forefront. The popularity of these tools has skyrocketed, transforming how individuals consume information, work, and learn.

Concerns about the mental health impact of AI on young people have been amplified by tragic events. Parents have initiated lawsuits against AI companies such as OpenAI, Character.AI, and Google, alleging that their chatbots contributed to the suicides of their teenage children. These cases highlight the critical need for accountability and robust safety measures within the AI industry. Megan Garcia, whose son died by suicide, testified in support of SB 243, sharing her experience and urging lawmakers to implement stronger AI regulations after her son expressed suicidal thoughts to virtual characters on a chatbot platform.

Industry Response and Future Outlook

Tech companies, while facing increasing regulatory scrutiny, have also been proactive in developing new features and safety protocols. Companies like OpenAI have praised SB 243's signing, viewing it as a "meaningful move forward when it comes to AI safety standards" and a step towards shaping a more responsible approach to AI development nationwide. Meta has also announced measures to block its chatbots from discussing sensitive topics like self-harm with teens, instead directing them to expert resources, and has strengthened parental controls.

The debate over AI regulation in California reflects a national trend. Similar legislative efforts are underway across the United States, with calls for clearer guidelines and stronger oversight. The coming years are likely to see continued efforts to refine AI governance, balancing the drive for innovation with the paramount need to protect vulnerable populations, especially children, from potential harms.

Other AI-Related Legislation in California

Beyond the specific bills concerning chatbots and child safety, California has been at the forefront of enacting a broader suite of AI regulations. In September 2025, Governor Newsom signed 18 AI-related bills, addressing various aspects of the technology. These include measures on:

  • Digital Replicas: Heightened consent requirements for the use of digital replicas in media and entertainment (AB 2602) and prohibitions on the unauthorized commercial use of deceased persons' digital replicas (AB 1836).
  • Training Data Disclosure: The Artificial Intelligence Training Data Transparency Act (AB 2013) requires AI developers to disclose information about their training data.
  • Watermarking: SB 942 mandates that major AI developers create AI detection tools, include watermarks in AI-generated content, and ensure third-party licensees maintain watermarking functionality.
  • AI Safety and Accountability: The Generative Artificial Intelligence Accountability Act (SB 896) requires reports on GenAI benefits and risks and disclosure of its use in customer interfaces.
  • Privacy: Amendments to the California Consumer Privacy Act (CCPA) to clarify its application to AI systems (AB 1008).
  • Education: Bills proposing the inclusion of AI literacy in state curriculum standards (AB 2876) and guidance on the safe use of AI in public schools (SB 1288).
  • Healthcare: Requirements for healthcare providers to disclose GenAI use in patient communications (AB 3030) and for licensed physicians to supervise AI tools in healthcare decisions (SB 1120).
  • Telemarketing: Disclosure requirements for AI-generated synthetic voices in telemarketing calls (AB 2905).
  • Definitions: Establishment of a uniform definition for "artificial intelligence" in California law (AB 2885).
  • Deepfakes and Pornography: Prohibitions on AI in child pornography creation (AB 1831), criminal penalties for deepfake pornography (SB 926), and reporting mechanisms for deepfake pornography on social media platforms (SB 981).
  • Elections and Disinformation: Measures to combat election disinformation, label inauthentic content, and require disclosure for AI-generated campaign advertisements (AB 2655, AB 2839, AB 2355).

These comprehensive actions by California underscore its position as a leader in attempting to navigate the complex ethical and safety challenges posed by artificial intelligence, striving to foster innovation while implementing necessary protections.

The Frontier AI Safety Act (SB 53)

In addition to the bills focused on child protection, California has also enacted SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). This landmark legislation targets the most powerful AI models, referred to as "frontier models," which are trained using immense computational power. The law mandates that developers of these frontier models implement and publicly disclose safety protocols designed to prevent catastrophic risks. "Catastrophic risk" is defined as a foreseeable and material risk that a frontier model could contribute to death or serious injury to more than 50 people, or cause over $1 billion in property damage from a single incident.

Large frontier developers, defined as those with annual gross revenues exceeding $500 million, are required to publish a framework on their websites detailing how they incorporate national and international standards, assess risks, apply mitigation strategies, and utilize third-party assessments. They must also report critical safety incidents to California's Office of Emergency Services (OES) within 15 days. The law includes whistleblower protections for AI workers and establishes a civil penalty of up to $1 million per violation, enforceable by the Attorney General's office. This comprehensive approach aims to ensure that the development of the most advanced AI systems is conducted with a strong emphasis on public safety and accountability, complementing the state's efforts to regulate AI across various sectors.

Balancing Innovation and Regulation

The legislative package signed by Governor Newsom signifies California's intent to be a proactive force in AI governance. By enacting a broad range of regulations, the state aims to strike a delicate balance between fostering its world-leading AI industry and ensuring public safety. The approach taken, particularly with SB 53, emphasizes transparency, accountability, and a "trust but verify" methodology for the most powerful AI systems. This strategy seeks to avoid stifling innovation, especially for smaller developers and startups, while imposing stricter requirements on those creating the most advanced and potentially impactful AI models. The ongoing dialogue between policymakers, industry stakeholders, and advocacy groups will continue to shape the future of AI regulation in California and beyond.

Conclusion: A Proactive Approach to AI Governance

California's recent legislative actions demonstrate a clear commitment to addressing the multifaceted challenges posed by artificial intelligence. By enacting measures focused on child protection, transparency, and safety for advanced AI models, Governor Newsom's administration is positioning the state as a leader in AI governance. While the veto of AB 1064 highlights the complexities and differing perspectives on the best approach to child safety, the signing of SB 243 and SB 53, along with numerous other AI-related bills, signals a determined effort to create a regulatory framework that supports responsible innovation while safeguarding the public. The state's proactive stance is likely to influence AI policy discussions nationwide, as the industry continues its rapid evolution.

Sources

  • Los Angeles Times: Gov. Newsom signs AI safety bills, vetoes one after pushback from the tech industry
  • MLex: California governor enacts multiple AI safety bills aimed at protecting children
  • The Hill: Newsom vetoes AI chatbot restrictions for kids, signs bill adding guardrails
  • Associated Press: California Governor Vetoes Bill to Restrict Kids' Access to AI Chatbots
  • ArentFox Schiff LLP: California Enacts 18 Artificial Intelligence Bills into Law
  • Governor of California: Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry
  • Governor of California: Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians
  • O’Melveny: California Enacts First-of-its-Kind AI Safety Regulation
  • TechCrunch: California’s new AI safety law shows regulation and innovation don’t have to clash
