On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul (D), the RAISE Act would make New York the first state in the nation to enact public safety regulations for frontier model developers and would impose substantial civil penalties on model developers in scope. 

The bill, which passed the New York Senate on a 58-1-4 vote and the New York Assembly on a 119-22 vote, advances a similar purpose to California’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047), an AI safety bill that was vetoed by California Governor Gavin Newsom (D) in 2024.  The RAISE Act’s substantive provisions, however, are narrower than SB 1047.  For example, the RAISE Act does not include third-party independent auditing requirements or whistleblower protections for employees.  Following the bill’s passage, New York State Senator and bill co-sponsor Andrew Gounardes stated that the bill would “ensure[] AI can flourish,” while requiring “reasonable, commonsense safeguard[s] we’d expect of any company working on a potentially dangerous product.”

Covered Models and Developers.  The RAISE Act solely addresses “frontier models,” defined as an AI model that is either (1) trained using more than 10²⁶ FLOPS, with compute costs exceeding $100 million, or (2) produced by applying “knowledge distillation” to a frontier model, with compute costs exceeding $5 million.  The bill’s obligations would apply to “large developers” of frontier models, i.e., persons that have trained at least one frontier model and have spent more than $100 million in aggregate compute costs to train frontier models.

Critical Harms and Safety Incidents.  Similar to SB 1047, the RAISE Act’s requirements focus on reducing risks of “critical harm.”  The bill defines “critical harm” as death or serious injury to 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological, or nuclear weapon or (2) the AI model engaging in conduct (i) with no meaningful human intervention and (ii) that would, if committed by a human, constitute a crime under the New York Penal Law requiring intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime. 

Safety Incident Reporting.  The bill also would require large developers to report “safety incidents” affecting their frontier models to the New York Attorney General and the New York Division of Homeland Security and Emergency Services within 72 hours after the incident occurs or after facts establish a “reasonable belief” that a safety incident has occurred.  The bill defines “safety incident” as (1) a known incidence of a “critical harm” or (2) circumstances that provide “demonstrable evidence of an increased risk of critical harm” resulting from an incident of frontier model autonomous behavior other than at the request of the user, unauthorized release or access, critical failure of technical or administrative controls, or unauthorized use. 

Pre-Deployment Safeguards.  Prior to deploying a frontier model, large developers would be required to implement “appropriate safeguards” to prevent unreasonable risk of critical harm.  Large developers also would be prohibited from deploying frontier models that create an unreasonable risk of critical harm and from making false or materially misleading statements or omissions in documents retained under the Act.

Pre-Deployment Documentation and Disclosure Requirements.  The RAISE Act would also impose several documentation and disclosure requirements on large developers prior to deploying a frontier model, including:

  • Safety and Security Protocols.  Large developers would be required to implement, publish, and annually review a written “safety and security protocol” that describes the developer’s (1) procedures and protections to reduce risks of critical harm; (2) cybersecurity protections that reduce risks of unauthorized access or misuse; (3) testing procedures for evaluating unreasonable risks of critical harm or misuse; and (4) senior personnel responsible for ensuring compliance.
  • Documentation.  Large developers would be required to retain an unredacted copy of their safety and security protocols, records of updates and revisions, and information on specific frontier model tests and test results or information sufficient for third parties to replicate testing procedures for as long as the frontier model is deployed, plus five years.
  • Disclosure.  Large developers would be required to disclose copies of their safety and security protocols, with appropriate redactions, to the New York Attorney General and New York Division of Homeland Security and Emergency Services, and to provide access to the safety and security protocol with redactions limited to those required by federal law, upon request.

The RAISE Act omits the third-party auditing requirements and whistleblower protections that were cornerstones of the vetoed California SB 1047 proposal.  On June 17, the Joint California Policy Working Group on AI Frontier Models released the final version of its report on Frontier AI Policy, recommending that frontier model regulations incorporate third-party risk assessments and whistleblower protections, in addition to public-facing transparency requirements and adverse event reporting.

Enforcement.  The Act would be enforced by civil actions brought by the New York Attorney General.  Violations would be punishable by up to $10 million in civil penalties for first violations and up to $30 million for subsequent violations, in addition to injunctive or declaratory relief.  The Act does not create a private right of action.

Under New York Senate rules, the RAISE Act must be delivered to the Governor within 45 days from the date of passage – by July 27, 2025.  Governor Hochul will then have 30 days to sign or veto the bill.  If enacted, the RAISE Act would come into effect 90 days after it is signed into law.  

*              *              *

We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group and of its Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Analese Bridges

Analese Bridges is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and Advertising and Consumer Protection Practice Groups. She represents and advises clients on a range of cybersecurity, data privacy, and consumer protection issues, including cyber and data security incident response and preparedness, cross-border privacy law, government and internal investigations, and regulatory compliance.