The California attorney general’s office launched a new disclosure and reporting initiative this week focused on “catastrophic risk” in the state’s dominant artificial intelligence (AI) industry.
The rules operationalize the core requirements of California’s landmark Transparency in Frontier Artificial Intelligence Act (SB 53), which is widely viewed as a proxy for a future federal mandate. The law requires developers of the most powerful AI systems to publicly explain how they identify, assess, and mitigate risks that could lead to mass casualties or extreme economic damage.
Under SB 53, California became the first U.S. jurisdiction to mandate formal, recurring disclosure of catastrophic-risk management for so-called frontier AI models.
The attorney general’s initiative establishes a reporting channel for employees and other “covered” individuals to flag potential violations or safety threats directly to regulators. Covered employees may disclose information when they have “reasonable cause to believe” that a developer’s activities pose “a specific and substantial danger to the public health or safety resulting from a catastrophic risk,” or violate the statute itself.
The state attorney general is required to publish an annual, anonymized report summarizing these disclosures.
The law defines catastrophic risk as a foreseeable risk that could result in the death of, or serious injury to, more than 50 people, or in more than $1 billion in property damage. Covered risks include weapons enablement, autonomous criminal activity, major cyberattacks, and the loss of human control over AI systems.
For large frontier developers—those with more than $500 million in annual revenue—the obligations are extensive. Firms must publish a “Frontier AI Framework” detailing governance structures, cybersecurity practices, and how catastrophic risks are identified and mitigated across a model’s lifecycle. These frameworks must be updated at least annually and disclosed publicly, with limited redactions permitted to protect trade secrets or security-sensitive information.
Developers must also file transparency reports before deploying new frontier models and report critical safety incidents to California’s Office of Emergency Services on an accelerated timeline, generally within 15 days of discovery and within 24 hours when an incident poses an imminent risk of death or serious injury.
When the legislation was signed last year, Governor Gavin Newsom described SB 53 as a “trust but verify” approach designed to balance innovation with public safety. California, he argued, can “establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive.”
The law builds directly on a state-commissioned scientific report on frontier AI risks and positions California as a de facto standard-setter amid the continued absence of comprehensive federal AI legislation.