As artificial intelligence (AI) technologies continue to advance and states increasingly pass legislation to regulate AI development and use, Congress and the White House are advancing proposals for comprehensive national AI legislation.

New proposals from the White House Office of Science and Technology Policy (OSTP) and Senator Marsha Blackburn (R-TN) offer comprehensive approaches to centralizing AI regulation within the federal government and promoting U.S. AI leadership.

On March 20, the Trump Administration released its National Policy Framework for AI, with more than two dozen AI-related “legislative recommendations,” pursuant to President Trump’s December 2025 AI Preemption Executive Order and consistent with the White House’s July 2025 AI Action Plan.  Senator Blackburn separately announced her TRUMP AMERICA AI Act discussion draft, which she said would “codify” the President’s AI Preemption EO and “create one rulebook for [AI]” without the need to broadly preempt all state regulations.

I. White House Framework

Recommendations.  The White House proposal, prepared by OSTP and Special Advisor for AI and Crypto David Sacks, contains 27 recommendations to Congress.  The three-page document offers a “light touch” approach to AI regulation, focusing on protecting minors and communities from harmful impacts, protecting IP rights, defending free speech, promoting innovation, supporting workers, and preempting certain state AI laws.  The framework fulfills a key directive of the President’s AI Preemption Executive Order, which called for legislative recommendations to establish a “uniform Federal policy framework for AI.”

  • Child Safety.  The framework calls on Congress to require “AI services and platforms” to “take measures to protect children,” including by building upon the deepfake protections of 2025’s TAKE IT DOWN Act and establishing age-assurance requirements, parental controls, and safety features for AI platforms and services “likely to be accessed by minors.”
  • Harmful Impacts.  The framework recommends that Congress ensure that communities benefit from, and “are protected from harmful impacts” of, AI development by protecting ratepayers from increased energy costs, streamlining AI infrastructure permitting, supporting “existing law enforcement efforts” to combat AI-enabled fraud, and ensuring that agencies plan for frontier model-related national security concerns.
  • IP Rights.  While noting the Trump Administration’s view that “training of AI models on copyrighted material does not violate copyright laws,” the framework “acknowledges arguments to the contrary” and calls for “the judiciary’s resolution” of the issue, rather than new legislation.  The framework also calls on Congress to consider licensing frameworks and likeness protections against the unauthorized use of AI-generated digital replicas, similar to the NO FAKES Act.
  • Censorship & Free Speech.  To “defend free speech” and prevent AI from “being used to silence or censor lawful political expression or dissent,” the framework calls on Congress to prohibit the government from “coercing” AI providers to ban, compel, or alter content, and to provide means for Americans to seek redress for government efforts to censor or control information on AI platforms.
  • AI Innovation.  Echoing the President’s January 2025 AI Executive Order, the framework calls on Congress to “ensur[e] American AI dominance” by establishing AI regulatory sandboxes and making federal dataset resources accessible for AI training, similar to the National AI Research Resource.  The framework warns that Congress should refrain from creating “any new federal rulemaking body to regulate AI,” and should instead support “sector-specific AI applications through existing regulatory bodies” and “industry-led standards.”
  • Workforce.  The framework calls for various steps to ensure that U.S. workers “benefit from AI-driven growth,” including “non-regulatory methods” to provide AI training in existing programs and studies of AI-driven workforce trends.  

Preemption.  The framework recommends that Congress preempt state AI laws that “impose undue burdens,” including state AI laws that “govern areas better suited” to federal regulation or that are “contrary to the United States’ national strategy to achieve global AI dominance.”  Specifically, the framework calls for the preemption of state laws that (1) “regulate AI development,” which is “an inherently interstate phenomenon”; (2) “unduly burden Americans’ use of AI” for otherwise lawful activities; or (3) “penalize AI developers” for unlawful third-party conduct involving their AI models.  

Consistent with the President’s AI Preemption Executive Order, which noted that its “legislative recommendation” should not preempt state AI laws related to child safety, AI infrastructure, or state procurement and use (among “other topics as shall be determined”), the framework calls for Congress to “respect key principles of federalism” by not preempting (1) state “traditional police powers” to enforce “laws of general applicability” against AI developers and users, including safety for minors, consumer protection, and CSAM laws; (2) state zoning laws for AI infrastructure; or (3) requirements “governing a state’s own use of AI.”

Previous attempts to preempt state AI legislation have stalled.  Last year, the Senate voted 99–1 to remove a provision from the omnibus “Big Beautiful Bill,” drafted by Senator Ted Cruz (R-TX), that would have penalized states for enforcing AI-related regulations that go beyond federal rules.  Blackburn was among the senators leading the charge against this “moratorium,” after first attempting to negotiate with Cruz to dilute the proposal.  Blackburn had expressed concern that the policy would have prevented states from regulating AI to promote privacy and child safety.

II. TRUMP AMERICA AI Act

Senator Blackburn’s TRUMP AMERICA AI Act, a discussion draft of a 291-page omnibus bill, incorporates provisions from several existing proposals to regulate aspects of AI and promote AI development in the United States.  Notably, although her office’s press release describes the bill as “solv[ing] the patchwork of state laws,” the bill does not expressly preempt all state and local laws related to AI, and in several cases it explicitly authorizes states to enact more stringent regulations than those contained in the bill.  Still, although the draft differs from the White House plan in many respects, the name and timing of the TRUMP AMERICA AI Act, and language in Blackburn’s press release linking it to the White House framework, suggest that she intends for the draft to play a role in Senate negotiations on legislative text to implement the White House framework.

The bill’s text includes, among other provisions, the NO FAKES Act and the Kids Online Safety Act, two of Blackburn’s top legislative priorities, which, respectively, would restrict unauthorized digital replicas of an individual’s voice or likeness and require online platforms to implement protections for underage users.

Significantly, the bill would also effectively impose new obligations and liabilities on online platforms by reforming Section 230 of the Communications Decency Act to deny immunity to platforms in certain circumstances, imposing a duty of care on AI platforms to prevent and mitigate certain harms to users, subjecting certain platforms to political bias audits, and creating a private right of action to enforce specified standards for identifying the provenance of digital content. 

Two provisions in the bill illustrate the similarities to and differences from White House priorities: on the one hand, the bill would codify Executive Order 14319, “Preventing Woke AI in the Federal Government”; on the other, it would specify that the use of copyrighted works in AI development does not constitute fair use under the Copyright Act, directly contradicting the National Policy Framework for AI’s call for judicial resolution of the issue.

The Blackburn bill also includes several measures designed to promote AI leadership in the United States, including authorizing AI testbeds and grand challenges, promoting standards development, and making permanent the National Artificial Intelligence Research Resource.  The draft would also require AI developers to regularly assess the risks of advanced AI and report their safety protocols to the Department of Homeland Security.

Prospects.  After years of debate and hundreds of AI-related bills introduced, Congress has thus far failed to pass substantial national standards for AI regulation.  As AI use continues to proliferate and both policymakers and the public learn more about its potential risks and benefits, states have pushed forward with their own regulations in the face of Congressional inaction. The President’s AI Preemption EO and the new legislative framework underscore the opportunity for federal legislation to harmonize AI rules nationwide.

Senator Blackburn’s proposal is unlikely to advance in the Senate as drafted, but it may influence efforts to translate the OSTP framework into legislative language.  In the meantime, with scarce legislative days left before the midterm elections and fewer legislative vehicles expected to move through Congress this year, states will continue to play an outsize role in regulating the rapid advancement of AI.

Holly Fechner

Holly Fechner advises clients on complex public policy matters that combine legal and political opportunities and risks. She leads teams that represent companies, entities, and organizations in significant policy and regulatory matters before Congress and the Executive Branch.

She is a co-chair of Covington’s Technology Industry Group and a member of the Covington Political Action Committee board of directors.

Holly works with clients to:

Develop compelling public policy strategies
Research law and draft legislation and policy
Draft testimony, comments, fact sheets, letters and other documents
Advocate before Congress and the Executive Branch
Form and manage coalitions
Develop communications strategies

She is the Executive Director of Invent Together and a visiting lecturer at the Harvard Kennedy School of Government. She serves on the board of directors of the American Constitution Society.

Holly served as Policy Director for Senator Edward M. Kennedy (D-MA) and Chief Labor and Pensions Counsel for the Senate Health, Education, Labor & Pensions Committee.

She received The American Lawyer’s “Dealmaker of the Year” award in 2019.  The Hill has named her a “Top Lobbyist” from 2013 to the present, and she has been ranked by Chambers USA – America’s Leading Business Lawyers from 2012 to the present.  One client noted to Chambers: “Holly is an exceptional attorney who excels in government relations and policy discussions. She has an incisive analytical skill set which gives her the capability of understanding extremely complex legal and institutional matters.”  According to another client surveyed by Chambers, “Holly is incredibly intelligent, effective and responsive. She also leads the team in a way that brings out everyone’s best work.”

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, developing strategies to guide businesses facing complex legislative, regulatory, and investigative matters. Matt draws on more than 15 years of experience across Capitol Hill, private practice, state government, and political campaigns to advise clients on leading-edge policy issues involving artificial intelligence, semiconductors, connected and autonomous vehicles, and other critical and emerging technologies.

Matt works with clients to develop and execute complex public policy initiatives that involve legal, political, and reputational risks. He regularly assists clients to:

Develop public policy strategies
Draft federal and state legislation and regulations
Analyze legislation, regulations, and other government initiatives
Craft testimony, regulatory comments, fact sheets, letters and other advocacy materials
Prepare company executives and other witnesses to testify before Congress, state legislatures, and regulatory bodies
Represent clients before Congress, the White House, federal agencies, state legislatures, and state regulatory agencies
Build and manage policy advocacy coalitions

He advises clients across multiple policy areas, including matters involving regulation of critical and emerging technologies like artificial intelligence, connected and autonomous vehicles, and semiconductors; national security; intellectual property; antitrust; financial services technologies (“fintech”); food and beverage regulation; COVID-19 pandemic response and recovery; and election administration and campaign finance.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee.  Most significantly, Matt staffed the Committee in passing the Electoral Count Reform Act, a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections, and in conducting the Committee’s bipartisan joint investigation (with the Homeland Security Committee) into the security planning and response to the January 6, 2021 attack on the Capitol.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory matters and managing state-level advocacy efforts.

In addition to his policy work, as a member of Covington’s nationally recognized (Chambers Band 1) Election and Political Law Practice Group, Matt advises and represents clients on the full range of political law compliance and enforcement matters, including:

Federal election, campaign finance, lobbying, and government ethics laws
The Securities and Exchange Commission’s “Pay-to-Play” rule
Election and political laws of states and municipalities across the country

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA), where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on new regulatory frameworks governing artificial intelligence, robotics, and other emerging technologies, digital services, and digital infrastructure. August leverages his AI and technology policy experiences to help clients understand AI industry developments, emerging risks, and policy and enforcement trends. He regularly advises clients on AI governance, risk management, and compliance under data privacy, consumer protection, safety, procurement, and platform laws.

August’s practice includes providing comprehensive advice on U.S. state and federal AI policies and legislation, including the Colorado AI Act and state laws regulating automated decision-making technologies, AI-generated content, generative AI systems and chatbots, and foundation models. He also assists clients in assessing risks and compliance under federal and state privacy laws like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in AI public policy advocacy and rulemaking.

Samuel Klein

Samuel Klein helps clients realize their policy objectives, manage reputational risks, and navigate the regulatory environment governing political engagement.

As a member of Covington’s Election and Political Law practice, Sam assists clients facing Congressional investigations and offers guidance on ethics laws; with the firm’s Public Policy group, Sam supports strategic advocacy across a breadth of policy domains at the federal, state, and local levels.

Sam spent one year as a law clerk at the Federal Election Commission. His prior experience includes serving as an intern to two senior members of Congress and helping clients communicate nuanced policy concepts to lawmakers and stakeholders as a public-affairs consultant.