In the absence of congressional action on comprehensive artificial intelligence (AI) legislation, state legislatures are forging ahead with groundbreaking bills to regulate the rapidly advancing technology. On May 8, the Colorado House of Representatives passed SB 205, a far-reaching and comprehensive AI bill, on a 41-22-2 vote. The final vote came just days after the state Senate’s passage of the bill on May 3, making Colorado the first state in the nation to send comprehensive AI legislation to its governor for signature. Governor Jared Polis (D) has not indicated whether he will sign or veto the bill; if SB 205 becomes law, it would establish a broad regulatory regime for developers and deployers of “high-risk” AI systems.
High-risk AI systems, as defined by the bill, are AI systems that make, or play a substantial part in making, consequential decisions that affect consumers. SB 205’s duties and requirements would aim to minimize the risk of algorithmic discrimination, defined as differential treatment or impacts that disfavor individuals or groups based on protected classifications, resulting from the use of high-risk AI systems.
Algorithmic Discrimination Duty of Care. SB 205 would impose a duty of reasonable care on developers and deployers of high-risk AI to protect consumers from algorithmic discrimination. The bill, which would be enforced exclusively by the Colorado Attorney General, would also establish a rebuttable presumption that high-risk AI developers and deployers have met this duty of reasonable care if they comply with the bill’s requirements.
AI Interaction Notices & Public Disclosures. SB 205 would require entities that deploy, sell, or otherwise make available an AI system that is “intended to interact with consumers” to disclose to consumers that they are interacting with an AI system, unless obvious to a reasonable person. The bill would also require all AI developers and deployers to issue public statements disclosing the types of high-risk AI systems they develop, modify, or deploy and how they manage algorithmic discrimination risks, with updates within 90 days after modifying any high-risk AI.
High-Risk AI Developer Requirements. High-risk AI developers would be required to disclose to deployers information related to harmful or inappropriate uses, training data and data governance measures, performance evaluations, algorithmic discrimination safeguards, and other aspects of high-risk AI systems, along with any other information required to conduct impact assessments or monitor a high-risk AI system’s performance for risks of algorithmic discrimination. High-risk AI developers would also be required to disclose to the Colorado Attorney General and all known deployers and developers of a high-risk AI system any known or foreseeable risk of algorithmic discrimination arising from the system’s intended uses, within 90 days after discovering that such algorithmic discrimination occurred.
High-Risk AI Deployer Requirements. SB 205 would require high-risk AI deployers to implement a “risk management policy and program” for mitigating algorithmic discrimination, which must be regularly updated over a high-risk AI system’s life cycle and must be reasonable in light of the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework or equivalent risk management frameworks. High-risk AI deployers would also be required to conduct algorithmic discrimination impact assessments for each high-risk AI system in deployment and within 90 days after such AI systems are substantially modified.
Additionally, high-risk AI deployers would be required to notify consumers of the use of high-risk AI for consequential decisions that affect them, to provide consumers with statements disclosing the high-risk AI system’s purposes, data, and components, and to provide information regarding consumers’ rights to opt out of profiling for decisions with legal or similarly significant effects under the Colorado Privacy Act. High-risk AI deployers would also be required to provide consumers with opportunities to (1) correct any incorrect personal data processed by the high-risk AI system and (2) appeal adverse consequential decisions arising from the use of a high-risk AI system, which must allow for human review if technically feasible. Finally, high-risk AI deployers would be obligated to disclose incidents of algorithmic discrimination to the Colorado Attorney General within 90 days of discovering the incident.
Comprehensive AI Bills in Perspective. Colorado’s passage of SB 205 coincides with votes to advance comprehensive AI bills in two separate California legislative committees. On April 23, the California Assembly Judiciary Committee voted 9-2 to pass AB 2930, a comprehensive AI bill that would regulate the use of automated decision tools. Mirroring SB 205’s requirements for high-risk AI systems, AB 2930 would impose impact assessment, notice, and disclosure requirements on developers and deployers to mitigate algorithmic discrimination risks. Also on April 23, the California Senate Governmental Organization Committee voted 11-0 to pass the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), followed by the Senate Appropriations Committee’s 7-2 vote in favor of that bill on May 6. While Colorado’s SB 205 and California’s AB 2930 would regulate AI systems based on their use in consequential decision-making and address risks of algorithmic discrimination, SB 1047 would regulate AI systems based on their technical capabilities and address risks to public safety. We are closely monitoring these and related state AI developments as they unfold. A more detailed summary of California SB 1047 is available here, a summary of key themes in other recent state AI bills is available here, and our overview of recent state synthetic media and generative AI legislation is available here. Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.