Khabor Wala Desk
Published: 30th September 2025, 12:26 PM
California Governor Gavin Newsom has signed into law landmark legislation requiring the world’s largest artificial intelligence companies to disclose their safety protocols publicly and report critical incidents, state lawmakers announced on Monday.
Senate Bill 53 (SB 53) represents California’s most ambitious attempt yet to regulate Silicon Valley’s rapidly evolving AI industry while maintaining the state’s position as a global technology hub.
“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails,” said State Senator Scott Wiener, the bill’s sponsor.
Key Provisions
The legislation imposes several obligations on major AI companies, with a strong focus on transparency and accountability:
| Provision | Requirement |
| --- | --- |
| Safety protocol disclosure | Companies must publicly disclose their safety and security protocols, in redacted form to protect intellectual property. |
| Critical incident reporting | Critical safety incidents (such as model-enabled weapons threats, major cyber-attacks, or loss of model control) must be reported to state officials within 15 days. |
| Whistleblower protections | Employees who report dangers or violations are legally protected. |
| Dangerous deceptive behaviour reporting | Instances where an AI system engages in dangerous deception during testing that increases the risk of catastrophic harm must be disclosed publicly. |
For example, if an AI system misrepresents the effectiveness of controls preventing it from aiding in bioweapon construction, developers must report the incident.
According to Senator Wiener, California’s approach differs from the European Union’s AI Act, which requires private disclosures to government agencies. SB 53 mandates public disclosure, aiming to ensure greater accountability and transparency to the public.
“This law ensures that AI companies cannot hide behind corporate secrecy when their systems pose real-world risks,” Wiener stated.
The law’s working group included leading AI experts, notably Stanford University’s Fei-Fei Li, often referred to as the “godmother of AI”, who helped shape the provisions to mitigate risks associated with increasingly autonomous systems.
Analysts and advocates describe SB 53 as world-first legislation, particularly due to its public reporting requirements for critical incidents and deceptive AI behaviour. It marks a major step toward responsible AI development in the United States, setting a potential precedent for other states and nations.
“California is sending a clear message: AI innovation must go hand in hand with safety, transparency, and accountability,” Wiener added.