On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models. If signed into law by Governor Kathy Hochul (D), the RAISE Act would make New York the first state in the nation to enact public safety regulations for frontier model developers and would impose substantial fines on model developers in scope.
The bill, which passed the New York Senate on a 58-1-4 vote and the New York Assembly on a 119-22 vote, advances a similar purpose to California’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047), an AI safety bill that was vetoed by California Governor Gavin Newsom (D) in 2024. The RAISE Act’s substantive provisions, however, are narrower than SB 1047. For example, the RAISE Act does not include third-party independent auditing requirements or whistleblower protections for employees. Following the bill’s passage, New York State Senator and bill co-sponsor Andrew Gounardes stated that the bill would “ensure[] AI can flourish,” while requiring “reasonable, commonsense safeguard[s] we’d expect of any company working on a potentially dangerous product.”
Covered Models and Developers. The RAISE Act solely addresses “frontier models,” defined as an AI model that is either (1) trained using more than 10²⁶ FLOPs and costing more than $100 million in compute or (2) produced by applying “knowledge distillation” to a frontier model and costing more than $5 million in compute. The bill’s obligations would apply to “large developers” of frontier models, i.e., persons that have trained at least one frontier model and have spent more than $100 million in aggregate compute costs to train frontier models.
Critical Harms and Safety Incidents. Similar to SB 1047, the RAISE Act’s requirements focus on reducing risks of “critical harm.” The bill defines “critical harm” as death or serious injury to 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological, or nuclear weapon or (2) conduct by the AI model that (i) occurs with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Law requiring intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
Safety Incident Reporting. The bill also would require large developers to report “safety incidents” affecting their frontier models to the New York Attorney General and the New York Division of Homeland Security and Emergency Services within 72 hours after the safety incident occurred or after facts establish a “reasonable belief” that a safety incident occurred. The bill defines “safety incident” as (1) a known incidence of a “critical harm” or (2) circumstances that provide “demonstrable evidence of an increased risk of critical harm” resulting from an incident of frontier model autonomous behavior other than at the request of a user, unauthorized release or access, critical failure of technical or administrative controls, or unauthorized use.
Pre-Deployment Safeguards. Prior to deploying a frontier model, large developers would be required to implement “appropriate safeguards” to prevent unreasonable risk of critical harm. After development, large developers would be prohibited from deploying frontier models that create an unreasonable risk of critical harm, or from making false or materially misleading statements or omissions related to documents retained under the Act.
Pre-Deployment Documentation and Disclosure Requirements. The RAISE Act would also impose several documentation and disclosure requirements on large developers prior to deploying a frontier model, including:
- Safety and Security Protocols. Large developers would be required to implement, publish, and annually review a written “safety and security protocol” that describes the developer’s (1) procedures and protections to reduce risks of critical harm; (2) cybersecurity protections that reduce risks of unauthorized access or misuse; (3) testing procedures for evaluating unreasonable risks of critical harm or misuse; and (4) senior personnel responsible for ensuring compliance.
- Documentation. Large developers would be required to retain an unredacted copy of their safety and security protocols, records of updates and revisions, and information on specific frontier model tests and test results or information sufficient for third parties to replicate testing procedures for as long as the frontier model is deployed, plus five years.
- Disclosure. Large developers would be required to disclose copies of their safety and security protocols, with appropriate redactions, to the New York Attorney General and New York Division of Homeland Security and Emergency Services, and to provide access to the safety and security protocol with redactions limited to those required by federal law, upon request.
The RAISE Act omits the third-party auditing requirements and whistleblower protections that were cornerstones of the vetoed California SB 1047 proposal. On June 17, the Joint California Policy Working Group on AI Frontier Models released the final version of its report on Frontier AI Policy, recommending that frontier model regulations incorporate third-party risk assessments and whistleblower protections, in addition to public-facing transparency requirements and adverse event reporting.
Enforcement. The Act would be enforced by civil actions brought by the New York Attorney General. Violations would be punishable by up to $10 million in civil penalties for first violations and up to $30 million for subsequent violations, in addition to injunctive or declaratory relief. The Act does not create a private right of action.
Under New York Senate rules, the RAISE Act must be delivered to the Governor within 45 days of the date of passage, i.e., by July 27, 2025. Governor Hochul will then have 30 days to sign or veto the bill. If enacted, the RAISE Act would come into effect 90 days after it is signed into law.
* * *
We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.