On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session, scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” for participating AI developers to test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become only the second state to enact industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well: the California Privacy Protection Agency is considering rules that would apply to certain automated decisionmaking and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI systems and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions and would provide for an AI workforce grant program and a new “AI Council” to provide advisory opinions and guidance on AI.

Despite these similarities, a number of provisions in the 41-page draft of TRAIGA would differ from the Colorado AI Act:

Lower Thresholds for “High-Risk AI.”  Although TRAIGA takes a risk-based approach to regulation by focusing requirements on AI systems that present heightened risks to individuals, the scope of TRAIGA’s “high-risk” category would arguably be broader than that of the Colorado AI Act.  First, TRAIGA would apply to systems that are a “contributing factor” in consequential decisions, not merely those that constitute a “substantial factor” in such decisions, as contemplated by the Colorado AI Act.  Additionally, TRAIGA would define “consequential decision” more broadly than the Colorado AI Act to include decisions that affect consumers’ access to, or the cost or terms of, for example, transportation services, criminal case assessments, and electricity services.

New Requirements for Distributors and Other Entities.  TRAIGA would build upon the Colorado AI Act’s approach to regulating key actors in the AI supply chain by adding a new role for AI “distributors,” defined as persons, other than developers, that make an AI system “available in the market.”  Distributors would have a duty to use reasonable care to prevent algorithmic discrimination, including a duty to withdraw, disable, or recall non-compliant high-risk AI systems, as appropriate.

Ban on “Unacceptable Risk” AI Systems.  Similar to the EU AI Act, TRAIGA would prohibit the development or deployment of certain AI systems that pose unacceptable risks, including AI systems used to manipulate human behavior, engage in social scoring, capture biometric identifiers of an individual, infer or interpret sensitive personal attributes, infer (or that have the capability to infer) emotions without consent, or produce deepfakes that constitute CSAM or intimate imagery prohibited under Texas law. 

New Generative AI Training Data Record-Keeping Requirement.  TRAIGA would impose requirements specific to developers of generative AI systems, who would be required to keep “detailed records” of generative AI training datasets, consistent with suggested actions in NIST’s AI Risk Management Framework Generative AI Profile, previously covered here.

Expanded Reporting for Deployers; No Reporting for Developers.  TRAIGA would impose reporting requirements for AI system deployers (defined as persons that “put into effect or commercialize” high-risk AI systems) that go beyond those in the Colorado AI Act.  TRAIGA would require deployers to provide written notice to the Texas AG, relevant regulatory authorities, or TRAIGA’s newly established AI Council, as well as to “affected consumers,” where the deployer becomes aware or is made aware that a deployed high-risk AI system has caused or is likely to result in algorithmic discrimination or any “inappropriate or discriminatory consequential decision.”  Unlike the Colorado AI Act, however, TRAIGA would not impose reporting requirements on developers.

Exemptions.  TRAIGA would recognize exemptions for (1) research, training, testing, and other pre-deployment activities within the scope of its sandbox program (unless such activities constitute prohibited uses), (2) small businesses, as defined by the U.S. Small Business Administration, that meet certain other requirements, and (3) developers of open-source AI systems, so long as the developer takes steps to prevent high-risk uses and makes the “weights and technical architecture” of the AI system publicly available.

Enforcement.  TRAIGA would authorize the Texas AG to enforce its high-risk AI requirements for developers, deployers, and distributors and to seek injunctive relief and civil penalties, subject to a 30-day cure period.  Additionally, TRAIGA would provide a limited private right of action for injunctive and declaratory relief against entities that develop or deploy AI for prohibited uses.

*              *              *

TRAIGA’s prospects for passage are far from certain.  As in other states, including Colorado, the draft text may be substantially amended through the legislative process.  Nonetheless, if enacted, TRAIGA would firmly establish a risk-based, consumer protection-focused framework as a national model for AI regulation in the United States.  We will be closely monitoring TRAIGA and other state AI developments as the 2025 state legislative sessions unfold.

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics. He advises clients before Congress, state legislatures, and government agencies, helping businesses to navigate complex legislative, regulatory, and investigations matters, mitigate their legal, political, and reputational risks, and capture business opportunities.

Drawing on more than 15 years of experience on Capitol Hill and in private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels and represents businesses in legislative and regulatory matters involving intellectual property, national security, regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and other tech policy issues. He also represents clients facing congressional investigations or inquiries across a range of committees and subject matters.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.