On December 19, New York Governor Kathy Hochul (D) signed the Responsible AI Safety & Education (“RAISE”) Act into law, making New York the second state in the nation to codify public safety disclosure and reporting requirements for developers of frontier AI models.  Prior to signing, Governor Hochul secured several commitments from the legislature to adopt updates (known as “chapter amendments”) to the RAISE Act to align its text with California’s Transparency in Frontier AI Act (“TFAIA”), significantly modifying the version passed by the legislature in June.  In a press release, Governor Hochul stated that the RAISE Act “builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind.”

Frontier Developers.  Effective January 1, 2027, the RAISE Act adopts a regulatory framework aligned with that of the TFAIA, which imposes separate and overlapping requirements for (1) “frontier developers,” i.e., persons who have trained, or initiated the training of, a foundation model using a quantity of computing power greater than 10^26 FLOPs (a “frontier model”), and (2) “large frontier developers,” i.e., frontier developers with annual gross revenues over $500 million.  Unlike TFAIA, however, the RAISE Act applies only to frontier models “developed, deployed, or operating in whole or in part” in New York, and exempts New York colleges and universities that “engag[e] in academic research regarding artificial intelligence models.” 
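
To make these overlapping definitional thresholds concrete, the sketch below (in Python) walks through the two tests as described above.  It is an illustration only: the threshold values restate the statutory definitions, while the function and parameter names are hypothetical.

```python
# Illustrative sketch only.  The threshold values restate the statutory
# definitions summarized above; the function and parameter names are
# hypothetical, not statutory terms.

FRONTIER_COMPUTE_FLOPS = 1e26        # training compute defining a "frontier model"
LARGE_DEVELOPER_REVENUE_USD = 500e6  # annual gross revenue threshold

def classify_developer(training_flops: float, annual_revenue_usd: float,
                       nexus_with_ny: bool, academic_research_only: bool) -> str:
    """Classify a developer under the RAISE Act's overlapping categories."""
    if not nexus_with_ny or academic_research_only:
        return "outside RAISE Act scope"   # NY nexus requirement and academic exemption
    if training_flops <= FRONTIER_COMPUTE_FLOPS:
        return "not a frontier developer"
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "large frontier developer"  # subject to both tiers of obligations
    return "frontier developer"

# A developer training at 2 x 10^26 FLOPs with $700M in annual revenue:
print(classify_developer(2e26, 7e8, nexus_with_ny=True, academic_research_only=False))
# -> large frontier developer
```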

The RAISE Act does not expressly contemplate future modifications to these definitions.  By contrast, TFAIA will require California’s Department of Technology to provide “recommendations” to the California Legislature on “whether and how to update” its definitions of “frontier model,” “frontier developer,” and “large frontier developer.”

Frontier AI Frameworks.  The RAISE Act requires large frontier developers to write, implement, and publish “documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks” (“frontier AI frameworks”).  Like TFAIA, the RAISE Act defines “catastrophic risk” as a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to death or serious injury to more than 50 people or more than $1 billion in property damage by: (1) providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon, (2) engaging in unsupervised conduct that is a cyberattack, or that would constitute murder, assault, extortion, or theft if committed by a human, or (3) evading the control of its developer or user.  The RAISE Act requires developers’ frontier AI frameworks to address topics identical to those required by TFAIA, including how the developer incorporates national and international standards; applies thresholds for identifying and assessing whether a frontier model poses a catastrophic risk; reviews assessments and mitigations when deciding to deploy a frontier model or “use it extensively internally”; and institutes internal governance practices to ensure the implementation of the framework, among other topics. 

Although it appears substantively similar to TFAIA, the RAISE Act’s frontier AI framework requirement may be more expansive.  While TFAIA requires frontier AI frameworks to describe a developer’s “approach” to the topics above, the RAISE Act requires large frontier developers to describe how they “handle” these topics “in detail.”

Transparency Reports.  Like TFAIA, the RAISE Act requires frontier developers to publish “transparency reports” on their websites, or as part of larger documents such as system or model cards, before or when deploying new or substantially modified frontier models.  Frontier developers must disclose various categories of information about the frontier model’s uses and limitations in their transparency reports, and large frontier developers must include additional information about catastrophic risk assessments.

Critical Safety Incident Reporting.  Like TFAIA, the RAISE Act requires frontier developers to report “critical safety incidents,” i.e., death or bodily injury caused by unauthorized modification or exfiltration of model weights or loss of control, harm from the “materialization of a catastrophic risk,” or “materially increased catastrophic risk” from a frontier model’s use of “deceptive techniques” to subvert its controls.  However, the RAISE Act requires frontier developers to report critical safety incidents within 72 hours after either (1) determining that a critical safety incident “has occurred” or (2) “learning facts sufficient to establish a reasonable belief” that an incident occurred.  TFAIA, on the other hand, only requires developers to report critical safety incidents within 15 days of “discovering the incident.”  Both laws require developers to report critical safety incidents within 24 hours to appropriate authorities if the incident “poses an imminent risk of death or serious physical injury.”  
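
The differing trigger-and-deadline structures can be illustrated with a short sketch computing each statute's outer reporting deadline from a hypothetical trigger date.  The windows restate the periods described above; everything else is assumed for illustration.

```python
# Illustrative sketch only.  The reporting windows restate the periods
# described above; the trigger events and function names are hypothetical.
from datetime import datetime, timedelta

def raise_act_deadline(trigger: datetime, imminent_risk: bool = False) -> datetime:
    """RAISE Act: 72 hours from determining, or forming a reasonable belief,
    that a critical safety incident occurred; 24 hours if the incident poses
    an imminent risk of death or serious physical injury."""
    return trigger + (timedelta(hours=24) if imminent_risk else timedelta(hours=72))

def tfaia_deadline(discovery: datetime, imminent_risk: bool = False) -> datetime:
    """TFAIA: 15 days from discovering the incident; 24 hours if imminent risk."""
    return discovery + (timedelta(hours=24) if imminent_risk else timedelta(days=15))

t = datetime(2027, 3, 1, 9, 0)
print(raise_act_deadline(t))  # 2027-03-04 09:00:00
print(tfaia_deadline(t))      # 2027-03-16 09:00:00
```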

Internal Use Catastrophic Risk Assessment Reporting.  Like TFAIA, the RAISE Act requires large frontier developers to report “a summary of any assessment of catastrophic risk” resulting from the large frontier developer’s “internal use” of its frontier models every three months or “pursuant to another reasonable schedule” specified by the developer. 

Disclosure Statement & Assessment Fee.  Unlike TFAIA, the RAISE Act prohibits large frontier developers from developing, deploying, or operating a frontier model in New York unless the large frontier developer files a “disclosure statement” with a new “office” established by the RAISE Act within New York’s Department of Financial Services (“DFS Office”).  The disclosure statement must include:

  • The identity of the large frontier developer and all names under which it conducts business;
  • The addresses of the large frontier developer’s principal place of business and offices in New York;
  • If the large frontier developer is a private company, a list of all “persons or entities that beneficially own a five percent or greater interest” in the large frontier developer, along with all persons who formerly owned such interest in the preceding five years;
  • If the large frontier developer is a publicly traded company, all persons or entities that beneficially own a 50 percent or greater interest in the large frontier developer; and
  • The large frontier developer’s primary, secondary, and tertiary points of contact for purposes of receiving RAISE Act inquiries.  

Large frontier developers must renew their disclosure statements once every two years, or if the “ownership of the frontier model is transferred” or there is a “material change to the information reported” in a prior disclosure statement.  In addition to the statement, large frontier developers must be “assessed in pro rata shares” by DFS “to defray the operating expenses” of RAISE Act implementation. 

DFS Office Implementation and Rulemaking.  In a significant departure from prior versions of the bill, the DFS Office created by the RAISE Act will be “tasked with implementation” of the law.  Similar to the role of California’s Office of Emergency Services under TFAIA, the RAISE Act’s DFS Office will be charged with receiving critical safety incident reports, summaries of assessments of catastrophic risk resulting from internal use, and biennial disclosure statements.  The DFS Office also will be required to “maintain and publish a list of large frontier developers who have filed disclosure statements.” 

The RAISE Act also grants the DFS Office broad rulemaking authority.  Specifically, the DFS Office is authorized to “adopt rules and regulations to implement” the RAISE Act’s provisions, including “additional reporting or publication requirements” such as “post-critical safety incident information, sharing plans and protocols, and the transmission of frontier AI frameworks to the office,” if such regulations will “facilitate safety and transparency consistent with the underlying purpose of” the RAISE Act.  TFAIA, by contrast, grants no rulemaking authority to implement the law.

Enforcement and Safe Harbor.  A large frontier developer that violates the RAISE Act’s frontier AI framework and reporting requirements, or that “fails to comply with its own frontier AI framework,” faces civil penalties, enforced by the New York Attorney General, of up to $1 million for a first violation and up to $3 million for each subsequent violation.  In addition, a large frontier developer that fails to file a disclosure statement or pay the assessment fee, or that files a disclosure statement containing false information, faces civil penalties of $1,000 per day of non-compliance plus an amount equal to any assessments owed, levied by the DFS Office after notice and a hearing.  Like TFAIA, the RAISE Act does not expressly establish penalties for violations by frontier developers that are not large frontier developers. 
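
As a rough illustration of how the two penalty tracks could accumulate, the sketch below applies the figures above to hypothetical inputs; it reflects only the caps and rates described in this summary.

```python
# Illustrative arithmetic only.  The penalty figures restate the caps above;
# the input values are hypothetical.

def ag_penalty_cap(violations: int) -> int:
    """Maximum AG-enforced exposure: up to $1M for the first violation and
    up to $3M for each subsequent violation."""
    return 0 if violations <= 0 else 1_000_000 + (violations - 1) * 3_000_000

def dfs_penalty(days_noncompliant: int, assessments_owed_usd: int) -> int:
    """DFS-levied penalty: $1,000 per day of non-compliance plus any assessments owed."""
    return days_noncompliant * 1_000 + assessments_owed_usd

print(ag_penalty_cap(3))         # 7000000 -- maximum exposure for three violations
print(dfs_penalty(30, 250_000))  # 280000
```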

Both the RAISE Act and TFAIA establish safe harbors for frontier developers who comply with certain federal requirements, although the RAISE Act’s safe harbor applies only to its critical safety incident reporting requirement.  Specifically, a frontier developer will be “deemed in compliance” if it complies with federal requirements or standards that the DFS Office designates as “substantially equivalent to, or stricter than,” the RAISE Act’s critical safety incident reporting requirement.  If a frontier developer declares its intent to comply with designated federal requirements, however, failure to comply with those requirements “shall constitute a violation” of the RAISE Act.  Additionally, such frontier developers must send “copies of any critical safety incident reports required by such federal standards” to the DFS Office “concurrently with sending them to federal authorities.”

The signing of the RAISE Act, accompanied by Governor Hochul’s comment that the law creates a “unified benchmark” with California’s TFAIA for frontier AI model regulation “as the federal government lags behind,” comes just weeks after President Trump signed an Executive Order directing federal agencies to preempt or challenge state AI laws, including “onerous” state AI laws that “may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.”  However, absent comprehensive federal AI legislation or a federal regulatory framework, it remains uncertain what impact, if any, the President’s order will have on the implementation of the RAISE Act, TFAIA, or other state AI rules.

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group, as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Matthew Shapanka practices at the intersection of law, policy, and politics, developing strategies to guide businesses facing complex legislative, regulatory, and investigative matters. Matt draws on more than 15 years of experience across Capitol Hill, private practice, state government, and political campaigns to advise clients on leading-edge policy issues involving artificial intelligence, semiconductors, connected and autonomous vehicles, and other critical and emerging technologies.

Matt works with clients to develop and execute complex public policy initiatives that involve legal, political, and reputational risks. He regularly assists clients to:

  • Develop public policy strategies
  • Draft federal and state legislation and regulations
  • Analyze legislation, regulations, and other government initiatives
  • Craft testimony, regulatory comments, fact sheets, letters, and other advocacy materials
  • Prepare company executives and other witnesses to testify before Congress, state legislatures, and regulatory bodies
  • Represent clients before Congress, the White House, federal agencies, state legislatures, and state regulatory agencies
  • Build and manage policy advocacy coalitions

He advises clients across multiple policy areas, including matters involving regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and semiconductors; national security; intellectual property; antitrust; financial services technologies (“fintech”); food and beverage regulation; COVID-19 pandemic response and recovery; and election administration and campaign finance.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee. Most significantly, Matt staffed the Committee in passing the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and in conducting the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6, 2021 attack on the Capitol.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, as a member of Covington’s nationally recognized (Chambers Band 1) Election and Political Law Practice Group, Matt advises and represents clients on the full range of political law compliance and enforcement matters, including:

  • Federal election, campaign finance, lobbying, and government ethics laws
  • The Securities and Exchange Commission’s “Pay-to-Play” rule
  • Election and political laws of states and municipalities across the country

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Andrew Longhi advises national and multinational companies across industries on a wide range of regulatory, compliance, and enforcement matters involving data privacy, telecommunications, and emerging technologies.

Andrew’s practice focuses on advising clients on how to navigate the rapidly evolving legal landscape of state, federal, and international data protection laws. He proactively counsels clients on the substantive requirements introduced by new laws and shifting enforcement priorities. In particular, Andrew routinely supports clients in their efforts to launch new products and services that implicate the laws governing the use of data, connected devices, biometrics, and telephone and email marketing.

Andrew assesses privacy and cybersecurity risk as a part of diligence in complex corporate transactions where personal data is a key asset or data processing issues are otherwise material. He also provides guidance on generative AI issues, including privacy, Section 230, age-gating, product liability, and litigation risk, and has drafted standards and guidelines for large language models to follow. Andrew focuses on providing risk-based guidance that can keep pace with evolving legal frameworks.

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.