California Enacts SB-53, Creating New Requirements for Developers of Frontier Artificial Intelligence Models and Related Whistleblower Provisions
On September 29, 2025, Governor Gavin Newsom signed into law Senator Scott Wiener’s SB-53, which establishes new requirements for developers of frontier artificial intelligence (AI) models. Section 2 of the law enacts the Transparency in Frontier Artificial Intelligence Act (TFAIA), which requires certain developers of frontier models to: (1) publish a frontier AI framework on their website; (2) post a report on their website before making a new frontier model available to third parties; (3) transmit catastrophic risk assessments to the California Office of Emergency Services; and (4) report critical safety incidents to the California Office of Emergency Services. In addition, Section 4 of the law creates whistleblower protections for certain employees of frontier developers.1 We summarize key provisions of the law in this Legal Update.
Section 2: The TFAIA
The TFAIA applies to frontier developers that have trained, or initiated the training of, a frontier model. Under the TFAIA, a frontier model is a foundation model (i.e., an AI model trained on a broad data set, designed for generality of output, and adaptable to a wide range of distinctive tasks) that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations. Some of the TFAIA’s obligations apply only to “large” frontier developers, meaning frontier developers that, together with their affiliates, had annual gross revenue of more than $500 million in the preceding calendar year. Below we summarize key provisions of the TFAIA.
Publication of a Frontier AI Framework: A large frontier developer is required to write, implement, comply with, and publish on its website a frontier AI framework that applies to the organization. The large frontier developer will also need to review and update the AI framework at least once per year. If the large frontier developer makes a material modification to its frontier AI framework, it must publish the modified frontier AI framework and a justification for that modification within 30 days.
The frontier AI framework publication must describe how the large frontier developer approaches all of the following:
- National and international standards and industry-consensus best practices.
- Thresholds used to assess if the frontier model is capable of posing a catastrophic risk.
- Mitigation measures for potential catastrophic risks.
- Assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally.
- Use of third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
- Revisiting and updating the frontier AI framework.
- Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties.
- Identifying and responding to critical safety incidents.
- Internal governance practices to ensure implementation of these processes.
- Assessing and managing catastrophic risk resulting from the internal use of its frontier models.
Transparency Report: Before, or concurrently with, making a new frontier model or a substantially modified version of an existing frontier model available to a third party, a frontier developer is required to publish on its website a transparency report containing all of the following:
- The internet website of the frontier developer.
- A mechanism that enables a natural person to communicate with the frontier developer.
- The release date of the frontier model.
- The languages supported by the frontier model.
- The modalities of output supported by the frontier model.
- The intended uses of the frontier model.
- Any generally applicable restrictions or conditions on uses of the frontier model.
Large frontier developers must also address the following:
- Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer’s frontier AI framework.
- The results of those assessments.
- The extent to which third-party evaluators were involved.
- Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
Assessment of Catastrophic Risk: A large frontier developer is required to transmit to the California Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months, or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to the Office of Emergency Services, with written updates as appropriate.
Under the law, “catastrophic risk” is a foreseeable and material risk that the frontier model will materially contribute to (a) the death of, or serious injury to, more than 50 people or (b) more than $1 billion in damage to, or loss of, property arising from a single incident involving a frontier model doing any of the following:
- Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon.
- Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense.
- Evading the control of its frontier developer or user.
However, “catastrophic risk” does not include a foreseeable and material risk from any of the following:
- Information that a frontier model outputs if the information is otherwise publicly accessible in a substantially similar form from a source other than a foundation model.
- Lawful activity of the federal government.
- Harm caused by a frontier model in combination with other software if the frontier model did not materially contribute to the harm.
Reporting Critical Safety Incidents: A frontier developer is required to report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident. If a frontier developer discovers a critical safety incident that poses an imminent risk of death or serious physical injury, it must disclose the incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law. The Office of Emergency Services is required to produce an annual report for the California Legislature and Governor with anonymized and aggregated information about critical safety incidents.
“Critical safety incidents” are:
- Unauthorized access to, modification of, or exfiltration of the model weights of a frontier model that results in death or bodily injury.
- Harm resulting from the materialization of a catastrophic risk.
- Loss of control of a frontier model causing death or bodily injury.
- A frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
The notification submitted to the Office of Emergency Services must include:
- The date of the critical safety incident.
- The reasons the incident qualifies as a critical safety incident.
- A short and plain statement describing the critical safety incident.
- Whether the incident was associated with internal use of a frontier model.
False Statements & Redactions: A frontier developer must not make materially false or misleading statements about catastrophic risk from its frontier models. Additionally, a large frontier developer must not make materially false or misleading statements about its implementation of, or compliance with, its frontier AI framework.
Frontier developers may redact published compliance documents to protect trade secrets, cybersecurity, safety, or national security, or to comply with law, but they must describe the basis for each redaction (where possible) and retain the unredacted information for five years.
Enforcement: The TFAIA is enforced by the California Attorney General’s office. A large frontier developer that violates specified provisions of the TFAIA, or that fails to comply with its own frontier AI framework, is subject to a civil penalty that depends on the severity of the violation, not to exceed $1 million per violation.
Updates to Definitions: Beginning in 2027, the California Department of Technology is required to annually assess, and if necessary, make recommendations to the Legislature on updates to definitions of “Frontier model,” “Frontier developer,” and “Large frontier developer,” so that they continue to accurately reflect technological developments and standards.
Section 4: Whistleblower Protection
Section 4 of the law provides protections for whistleblowers who are employees responsible for assessing, managing, or addressing risk of critical safety incidents in the company. A frontier developer may not prevent such employees from disclosing, or retaliate against them for disclosing, that the frontier developer’s activities pose a specific and substantial danger to public health or safety resulting from catastrophic risk, or that they violate the Act. A frontier developer must provide notice to employees regarding their whistleblower rights. A large frontier developer must also provide a reasonable internal process through which employees can anonymously report to the large frontier developer, in good faith, information indicating either a specific and substantial danger to public health or safety resulting from a catastrophic risk or a violation of the Act. Section 4 allows a covered employee to bring a civil action for injunctive relief, in addition to any other remedies available under existing law.
1 Section 3 of the law creates a consortium that will develop a framework for a public cloud computing cluster known as “CalCompute.”