On September 29, California Governor Gavin Newsom (D) signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), establishing public safety regulations for developers of “frontier models,” or large foundation AI models trained using massive amounts of computing power.  TFAIA is the first frontier model safety legislation in the country to become law.  In his signing statement, Governor Newsom stated that TFAIA will “provide a blueprint for well-balanced AI policies beyond [California’s] borders – especially in the absence of a comprehensive federal AI policy framework and national AI safety standards.”  TFAIA largely adopts the recommendations of the Joint California Policy Working Group on AI Frontier Models, which released its final report on frontier AI policy in June.

Frontier Developers.  Effective January 1, 2026, TFAIA will apply to “frontier developers” who have trained, or initiated the training of, a foundation model using a quantity of computing power greater than 10²⁶ FLOPS (a “frontier model”), with additional requirements for frontier developers with annual gross revenues exceeding $500 million (“large frontier developers”).  Notably, starting on January 1, 2027, TFAIA will require the California Department of Technology to annually provide recommendations to the Legislature on “whether and how to update” TFAIA’s definitions of “frontier model,” “frontier developer,” and “large frontier developer” to “ensure that they accurately reflect technological developments, scientific literature, and widely accepted national and international standards.”  Below we describe key obligations and restrictions imposed by TFAIA on such developers.
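To make the two statutory thresholds concrete, the following minimal Python sketch maps a developer onto TFAIA’s tiers.  The numeric thresholds come from the statute as described above; the class, field, and function names are illustrative assumptions only and are not drawn from TFAIA.

```python
from dataclasses import dataclass

# Illustrative sketch only: the statutory tests are legal definitions, and the
# names and structure below are assumptions made for clarity, not TFAIA terms.

COMPUTE_THRESHOLD_FLOPS = 1e26          # training compute above which a model is a "frontier model"
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual gross revenue triggering "large frontier developer" status

@dataclass
class Developer:
    max_training_compute_flops: float  # largest training run trained or initiated by the developer
    annual_gross_revenue_usd: float

def classify(dev: Developer) -> str:
    """Rough mapping of a developer onto TFAIA's tiers."""
    if dev.max_training_compute_flops <= COMPUTE_THRESHOLD_FLOPS:
        return "not a frontier developer"
    if dev.annual_gross_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "large frontier developer"
    return "frontier developer"

# Example: a hypothetical developer with a 2e26-FLOP training run and $600M in revenue
print(classify(Developer(2e26, 600_000_000)))  # -> "large frontier developer"
```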

Frontier AI Frameworks.  TFAIA will require a large frontier developer to create, implement, and publish a “frontier AI framework,” which is defined as “documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.”  Such frameworks must explain the developer’s approaches to:

  • Integration of Standards:  Incorporating “national standards, international standards, and industry-consensus best practices.”
  • Risk Thresholds and Mitigation:  Defining and assessing “thresholds used … to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk” and applying “mitigations to address the potential for catastrophic risks” based on those assessments.
  • Pre-Deployment Assessments:  Reviewing assessments and the adequacy of mitigations before deploying a frontier model externally or for “extensive[] internal[]” use, and using third parties to assess catastrophic risks and mitigations.
  • Framework Maintenance:  Revisiting and updating the developer’s frontier AI framework, including criteria for triggering such updates, and defining when models are “substantially modified enough to require” publishing transparency reports required by TFAIA (described further below).
  • Security and Incident Response:  Implementing “cybersecurity practices to secure unreleased model weights” and processes for “identifying and responding to critical safety incidents.”
  • Internal Use Risk Management:  Assessing and managing “catastrophic risk resulting from the internal use” of the developer’s frontier model, including risks resulting from the model “circumventing oversight mechanisms.”

Large frontier developers must review and update their frontier AI frameworks at least annually and must publish any “material modification,” along with a justification for the change, within 30 days of making it.

Transparency Reports.  Before or when deploying a new or a substantially modified frontier model, frontier developers and large frontier developers will be required to publish “transparency reports” on their websites or as part of larger documents such as “system cards” or “model cards.”  Frontier developer transparency reports must include the developer’s website, a “mechanism that enables a natural person to communicate” with the developer, the frontier model’s release date, supported languages, output modalities, and intended uses, and any “generally applicable restrictions or conditions on uses” of the frontier model.

In addition to these requirements, large frontier developers’ transparency reports must also summarize catastrophic risk assessments conducted pursuant to the large frontier developer’s frontier AI framework, the results of those assessments, any involvement by “third-party evaluators” in assessing catastrophic risk, and any “other steps taken to fulfill the requirements” of the large frontier developer’s frontier AI framework with respect to the frontier model.

Frontier developers may redact transparency reports to protect “trade secrets, the frontier developer’s cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law.”

Critical Safety Incident Reporting.  TFAIA will require frontier developers to report “critical safety incidents,” which are defined as any “unauthorized access to, modification of, or exfiltration of” model weights causing death or injury, “harm resulting from the materialization of a catastrophic risk,” “loss of control . . . causing death or bodily injury,” or a model “us[ing] deceptive techniques” to subvert its controls “in a manner that demonstrates materially increased catastrophic risk.”

Frontier developers are required to report such incidents within 15 days to the California Office of Emergency Services (“OES”) or, if a critical safety incident “poses an imminent risk of death or serious physical injury,” within 24 hours to an appropriate authority, including “any law enforcement agency or public safety agency with jurisdiction.”  Critical safety incident reports must be provided through a mechanism established by OES, and must include the date of the incident, reasons why the incident qualifies as a critical safety incident, a short and plain statement describing the incident, and whether the incident was “associated with internal use of a frontier model.” 
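As a rough illustration of the two reporting clocks described above, the sketch below computes a report recipient and deadline from an incident timestamp.  The function and argument names are assumptions made for illustration and do not reflect TFAIA’s text or any OES-prescribed mechanism.

```python
from datetime import datetime, timedelta

# Illustrative only: names below are assumptions, not statutory or OES terms.
def report_deadline(incident_time: datetime, imminent_risk: bool) -> tuple[str, datetime]:
    """Return (recipient, deadline) for a critical safety incident report under TFAIA."""
    if imminent_risk:
        # Imminent risk of death or serious physical injury: 24 hours, to an
        # appropriate authority (e.g., a law enforcement or public safety agency with jurisdiction).
        return "appropriate authority", incident_time + timedelta(hours=24)
    # Otherwise: 15 days, to the California Office of Emergency Services (OES).
    return "OES", incident_time + timedelta(days=15)

recipient, deadline = report_deadline(datetime(2026, 3, 1, 9, 0), imminent_risk=True)
print(recipient, deadline)  # appropriate authority 2026-03-02 09:00:00
```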

Catastrophic Risk Assessment Reporting.  TFAIA also will require large frontier developers to report to OES “a summary of any assessment of catastrophic risk” resulting from the large frontier developer’s “internal use” of any of its frontier models (while “internal use” is undefined, TFAIA may be referring to updates or modifications to a frontier model).  Large frontier developers must provide a summary of any such assessment to OES every three months or “pursuant to another reasonable schedule” specified by the developer and shared with OES.  However, TFAIA does not expressly require large frontier developers to conduct assessments of catastrophic risk or prohibit the deployment of frontier models that may present catastrophic risks.

TFAIA defines “catastrophic risks” as foreseeable and material risks that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to death or serious injury to more than 50 people or more than $1 billion in property damage by: (1) providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon, (2) engaging in a cyberattack, or conduct that would constitute murder, assault, extortion, or theft if committed by a human, without human oversight, or (3) evading the control of its developer or user.

Whistleblower Protections.  TFAIA will prohibit frontier developers from making or enforcing “a rule, regulation, policy, or contract” that prevents any employee responsible for managing critical safety risks (a “covered employee”) from disclosing, or retaliates against a covered employee for disclosing, information to authorities or supervisors if the employee has “reasonable cause to believe” the information shows that:  (1) the frontier developer’s activities “pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk,” or (2) the frontier developer has violated TFAIA.  Frontier developers will also be required to provide clear notices to covered employees of their rights and responsibilities under TFAIA, among other things.  Additionally, large frontier developers will be required to provide a “reasonable internal process” for covered employees to anonymously disclose the types of information above.

Enforcement.  A large frontier developer that violates TFAIA’s disclosure and reporting requirements, or that “fails to comply with its own frontier AI framework,” will be subject to civil penalties of up to $1 million per violation, enforced by the California Attorney General.  TFAIA does not expressly establish penalties for violations of disclosure and reporting requirements by frontier developers who are not large frontier developers.  Covered employees may bring civil actions for violations of TFAIA’s whistleblower protections described above and may seek injunctive relief and attorney’s fees.

TFAIA provides a safe harbor from its disclosure and reporting requirements for frontier developers who comply with certain federal requirements intended to assess, detect, or mitigate catastrophic risks associated with frontier models.  Specifically, frontier developers will be “deemed in compliance” with TFAIA’s disclosure and reporting requirements to the extent that the developer complies with federal requirements or standards that OES designates as “substantially equivalent to, or stricter than,” TFAIA’s requirements.  If a frontier developer declares its intent to comply with designated federal requirements, however, failure to comply with those requirements “shall constitute a violation” of TFAIA.  In a potential nod to recent efforts in Congress to impose a moratorium on the enforcement of state AI laws, and echoing calls for a national AI regulatory framework from lawmakers in other states, Governor Newsom’s signing statement highlighted the safe harbor as a “compliance pathway” that will “provide alignment” with any future “national AI standards that maintain or exceed the protections in this bill.”

Frontier AI Model Safety Legislation: TFAIA vs. RAISE Act.  The signing of TFAIA comes exactly one year after Governor Newsom vetoed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), a 2024 frontier model safety bill that would have imposed broader developer requirements, including third-party safety audits and “full shutdown” safeguards. 

TFAIA’s signing also follows the New York legislature’s passage of the Responsible AI Safety & Education (“RAISE”) Act, a frontier model public safety bill, in June.  Unlike TFAIA, the RAISE Act – which was passed by the legislature but has yet to be signed by New York Governor Kathy Hochul (D) – defines “frontier model” as an AI model that costs over $100 million in compute costs to train, in addition to being trained on more than 10²⁶ FLOPS.  The RAISE Act also lacks whistleblower protections and, in contrast to TFAIA’s focus on reporting and disclosure requirements, would require frontier model developers to implement “appropriate safeguards” before deploying a frontier model and would prohibit developers from deploying frontier models that create an unreasonable risk of “critical harm.”

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group, as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Matthew Shapanka practices at the intersection of law, policy, and politics. He advises clients before Congress, state legislatures, and government agencies, helping businesses to navigate complex legislative, regulatory, and investigations matters, mitigate their legal, political, and reputational risks, and capture business opportunities.

Drawing on more than 15 years of experience on Capitol Hill and in private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels and represents businesses in legislative and regulatory matters involving intellectual property, national security, regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and other tech policy issues. He also represents clients facing congressional investigations or inquiries across a range of committees and subject matters.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act – a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections—and the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew’s practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large-language machine-learning models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.