As artificial intelligence (AI) technologies continue to advance and states increasingly pass legislation to regulate AI development and use, Congress and the White House are proposing comprehensive nationwide laws.
New proposals from the White House Office of Science and Technology Policy (OSTP) and Senator Marsha Blackburn (R-TN) offer comprehensive approaches to centralizing AI regulation within the federal government and promoting U.S. AI leadership.
On March 20, the Trump Administration released its National Policy Framework for AI, with more than two dozen AI-related “legislative recommendations,” pursuant to President Trump’s December 2025 AI Preemption Executive Order and consistent with the White House’s July 2025 AI Action Plan. Senator Blackburn separately announced her TRUMP AMERICA AI Act discussion draft, which she said would “codify” the President’s AI Preemption EO and “create one rulebook for [AI]” without the need to broadly preempt all state regulations.
I. White House Framework
Recommendations. The White House proposal, prepared by OSTP and Special Advisor for AI and Crypto David Sacks, contains 27 recommendations to Congress. The three-page document offers a “light touch” approach to AI regulation, focusing on protecting minors and communities from harmful impacts, protecting IP rights, defending free speech, promoting innovation, supporting workers, and preempting certain state AI laws. The framework fulfills a key directive of the President’s AI Preemption Executive Order, which called for legislative recommendations to establish a “uniform Federal policy framework for AI.”
- Child Safety. The framework calls on Congress to require “AI services and platforms” to “take measures to protect children,” including by building upon the deepfake protections of 2025’s TAKE IT DOWN Act and establishing age-assurance requirements, parental controls, and safety features for AI platforms and services “likely to be accessed by minors.”
- Harmful Impacts. The framework recommends that Congress ensure that communities benefit from, and “are protected from harmful impacts” of, AI development by protecting ratepayers from increased energy costs, streamlining AI infrastructure permitting, supporting “existing law enforcement efforts” to combat AI-enabled fraud, and ensuring that agencies plan for frontier model-related national security concerns.
- IP Rights. While noting the Trump Administration’s view that “training of AI models on copyrighted material does not violate copyright laws,” the framework “acknowledges arguments to the contrary” and calls for “the judiciary’s resolution” of the issue, rather than new legislation. The framework also calls on Congress to consider licensing frameworks and likeness protections against the unauthorized use of AI-generated digital replicas, similar to the NO FAKES Act.
- Censorship & Free Speech. To “defend free speech” and prevent AI from “being used to silence or censor lawful political expression or dissent,” the framework calls on Congress to prohibit the government from “coercing” AI providers to ban, compel, or alter content, and to provide means for Americans to seek redress for government efforts to censor or control information on AI platforms.
- AI Innovation. Echoing the President’s January 2025 AI Executive Order, the framework calls on Congress to “ensur[e] American AI dominance” by establishing AI regulatory sandboxes and making federal dataset resources accessible for AI training, similar to the National AI Research Resource. The framework warns that Congress should refrain from creating “any new federal rulemaking body to regulate AI,” and should instead support “sector-specific AI applications through existing regulatory bodies” and “industry-led standards.”
- Workforce. The framework calls for various steps to ensure that U.S. workers “benefit from AI-driven growth,” including “non-regulatory methods” to provide AI training in existing programs and studies of AI-driven workforce trends.
Preemption. The framework recommends that Congress preempt state AI laws that “impose undue burdens,” including state AI laws that “govern areas better suited” to federal regulation or that are “contrary to the United States’ national strategy to achieve global AI dominance.” Specifically, the framework calls for the preemption of state laws that (1) “regulate AI development,” which is “an inherently interstate phenomenon”; (2) “unduly burden Americans’ use of AI” for otherwise lawful activities; or (3) “penalize AI developers” for unlawful third-party conduct involving their AI models.
Consistent with the President’s AI Preemption Executive Order, which noted that its “legislative recommendation” should not preempt state AI laws related to child safety, AI infrastructure, or state procurement and use (among “other topics as shall be determined”), the framework calls for Congress to “respect key principles of federalism” by not preempting (1) state “traditional police powers” to enforce “laws of general applicability” against AI developers and users, including safety for minors, consumer protection, and CSAM laws; (2) state zoning laws for AI infrastructure; or (3) requirements “governing a state’s own use of AI.”
Previous attempts to preempt state AI legislation have stalled. Last year, the Senate voted 99–1 to remove a provision drafted by Senator Ted Cruz (R-TX) from the omnibus “Big Beautiful Bill” that would have penalized states for enforcing AI-related regulations that go beyond federal rules. Blackburn was among the senators leading the charge against this “moratorium,” after first attempting to negotiate with Cruz to dilute the proposal. Blackburn had expressed concern that the policy would have prevented states from regulating AI to promote privacy and child safety.
II. TRUMP AMERICA AI Act
Senator Blackburn’s TRUMP AMERICA AI Act, a discussion draft of a 291-page omnibus bill, incorporates provisions from several existing proposals to regulate aspects of AI and promote AI development in the United States. Notably, although her office’s press release describes the bill as “solv[ing] the patchwork of state laws,” the bill does not expressly preempt all state and local laws related to AI, and in several cases it explicitly authorizes states to enact more stringent regulations than those contained in the bill. Still, despite its many differences from the White House plan, the name and timing of TRUMP AMERICA AI, and language in Blackburn’s press release linking it to the White House framework, suggest that she intends for the draft to play a role in Senate negotiations on legislative text to implement the White House framework.
The bill’s text includes, among other provisions, the NO FAKES Act and Kids Online Safety Act—two of Blackburn’s top legislative priorities, which, respectively, would restrict unauthorized digital replicas of an individual’s likeness and require online platforms to implement protections for underage users.
Significantly, the bill would also effectively impose new obligations and liabilities on online platforms by reforming Section 230 of the Communications Decency Act to deny immunity to platforms in certain circumstances, imposing a duty of care on AI platforms to prevent and mitigate certain harms to users, subjecting certain platforms to political bias audits, and creating a private right of action to enforce specified standards for identifying the provenance of digital content.
Two provisions in the bill illustrate the similarities to and differences from White House priorities: On the one hand, the bill would codify Executive Order 14319, “Preventing Woke AI in the Federal Government”; on the other, it would specify that the use of copyrighted works in AI development does not constitute fair use under the Copyright Act, directly contradicting the National Policy Framework for AI’s support for a judicial resolution.
The Blackburn bill also includes several measures designed to promote AI leadership in the United States, including authorizing AI testbeds and grand challenges, promoting standards development, and making permanent the National Artificial Intelligence Research Resource. The draft would also require AI developers to regularly assess the risks of advanced AI and report their safety protocols to the Department of Homeland Security.
Prospects. After years of debate and hundreds of AI-related bills introduced, Congress has thus far failed to pass substantial national standards for AI regulation. As AI use continues to proliferate and both policymakers and the public learn more about its potential risks and benefits, states have pushed forward with their own regulations in the face of congressional inaction. The President’s AI Preemption EO and the new legislative framework underscore the opportunity for federal legislation to harmonize AI rules nationwide.
Senator Blackburn’s proposal is unlikely to advance in the Senate as drafted, but it may influence efforts to translate the OSTP framework into legislative language. In the meantime, with scarce legislative days left before the midterm elections and fewer legislative vehicles expected to move through Congress this year, states will continue to play an outsize role in regulating the rapid advancement of AI.