This is part of an ongoing series of Covington blogs on the AI policies, executive orders, and other actions of the Trump Administration.  This blog describes AI actions taken by the Trump Administration in March 2025, and prior articles in this series are available here.

White House Receives Public Comments on AI Action Plan

On March 15, the White House Office of Science & Technology Policy ("OSTP") and the Networking and Information Technology Research and Development National Coordination Office within the National Science Foundation closed the comment period for public input on the White House's AI Action Plan, following their February 6 issuance of a Request for Information ("RFI") on the Plan.  As required by President Trump's AI EO, the RFI called on stakeholders to submit comments on the highest-priority policy actions to include in the new AI Action Plan, organized around 20 broad and non-exclusive topics, including data centers, data privacy and security, technical and safety standards, intellectual property, and procurement, with the goal of achieving the AI EO's policy of "sustain[ing] and enhanc[ing] America's global AI dominance."

The RFI drew 8,755 comments, including submissions from nonprofit organizations, think tanks, trade associations, industry groups, academia, and AI companies.  The final AI Action Plan is expected by July 2025.

NIST Launches New AI Standards Initiatives

The National Institute of Standards & Technology (“NIST”) announced several AI initiatives in March to advance AI research and the development of AI standards.  On March 19, NIST launched its GenAI Image Challenge, an initiative to evaluate generative AI “image generators” and “image discriminators,” i.e., AI models designed to detect if images are AI-generated.  NIST called on academia and industry research labs to participate in the challenge by submitting generators and discriminators to NIST’s GenAI platform.

On March 24, NIST released its final report on Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST AI 100-2e2025, with voluntary guidance for securing AI systems against adversarial manipulations and attacks.  Noting that adversarial attacks on AI systems “have been demonstrated under real-world conditions, and their sophistication and impacts have been increasing steadily,” the report provides a taxonomy of AI system attacks on predictive and generative AI systems at various stages of the “machine learning lifecycle.” 

On March 25, NIST announced the launch of an “AI Standards Zero Drafts project” that will pilot a new process for creating AI standards.  The new standards process will involve the creation of preliminary “zero drafts” of AI standards drafted by NIST and informed by rounds of stakeholder input, which will be submitted to standards developing organizations (“SDOs”) for formal standardization.  NIST outlined four AI topics for the pilot of the Zero Drafts project: (1) AI transparency and documentation about AI systems and data; (2) methods and metrics for AI testing, evaluation, verification, and validation (“TEVV”); (3) concepts and terminology for AI system designs, architectures, processes, and actors; and (4) technical measures for reducing synthetic content risks.  NIST called for stakeholder input on the topics, scope, and priorities of the Zero Drafts process, with no set deadline for submitting responses.

Michael Kratsios Confirmed as Director of the Office of Science & Technology Policy

On March 25, the U.S. Senate voted 74-25 to confirm Michael Kratsios, the Assistant to the President for Science & Technology, as the Director of the White House Office of Science & Technology Policy.  As the U.S. Chief Technology Officer and OSTP Associate Director under the first Trump Administration, Kratsios played a significant role in shaping U.S. AI policy, including overseeing the establishment of the White House National AI Initiative Office and OMB guidance on the use of AI by federal agencies finalized in November 2020.  In his February 25 written responses to the Senate Commerce Committee, Kratsios stated that he would "seek to develop additional technical standards for the development and deployment of AI systems" through a "use-case and sector-specific, risk-based policy approach," and would work with the Department of Commerce to assess the U.S. AI Safety Institute and "chart the best path forward for the institute to ensure continued American leadership" in AI.

On March 26, President Trump published a letter that he sent to Director Kratsios, directing Kratsios to meet three “challenges” to “deliver for the American people”:  (1) securing U.S. “technological supremacy” over potential adversaries “in critical and emerging technologies,” including AI, by accelerating research and development and removing regulatory barriers; (2) revitalizing the U.S. “science and technology enterprise” by reducing regulations, attracting talent, “empowering researchers,” and “protect[ing] our intellectual edge”; and (3) ensuring that “scientific progress and technological innovation fuel economic growth and better the lives of all Americans.”

Congress and States Continue to Respond to DeepSeek

The reaction to the rise of DeepSeek, including its implications for the U.S.-China AI competition, continued in March.  Members of Congress and state officials stepped up calls for bans on the use of DeepSeek's AI models on government devices.  On March 3, Representatives Josh Gottheimer (D-NJ) and Darin LaHood (R-IL) announced that they had sent letters to the governors of 47 states and the mayor of the District of Columbia urging them to "take immediate action" to ban DeepSeek from government-issued devices.  The letters, which warn of "serious concerns" regarding DeepSeek's data privacy and national security risks, follow Reps. Gottheimer and LaHood's introduction of the No DeepSeek on Government Devices Act (H.R. 1121) in February.  On March 6, Montana Attorney General Austin Knudsen issued a letter, signed by Knudsen and 20 other state attorneys general, urging Congress to pass the No DeepSeek on Government Devices Act.

States continued to pursue their own government use bans, following bans issued by officials in New York, Virginia, Iowa, and Pennsylvania last month.  On March 4, South Dakota Governor Larry Rhoden and the South Dakota Bureau of Information & Telecommunications issued a ban on the use of DeepSeek’s AI application by state employees, agencies, or government contractors on state government-issued or leased devices.  On March 21, Oklahoma Governor Kevin Stitt (R) announced a ban on downloading or accessing DeepSeek’s AI models on state-owned devices, or inputting “state data” into “any product using DeepSeek.”  In his announcement of the ban, Governor Stitt cited security risks, regulatory compliance issues, adversarial manipulation risks, and DeepSeek’s lack of robust security safeguards as reasons for the ban.

Nooree Lee

Nooree Lee represents government contractors in all aspects of the procurement process and focuses his practice on the regulatory aspects of M&A activity, procurements involving emerging technologies, and international contracting matters.

Nooree advises government contractors and financial investors regarding the regulatory aspects of corporate transactions and restructurings. His experience includes preparing businesses for sale, negotiating deal documents, coordinating large-scale diligence processes, and navigating pre- and post-closing regulatory approvals and integration. He has advised on 35+ M&A deals involving government contractors totaling over $30 billion in combined value. This includes Veritas Capital’s acquisition of Cubic Corp. for $2.8 billion; the acquisition of Perspecta Inc. by Veritas Capital portfolio company Peraton for $7.1 billion; and Cameco Corporation’s strategic partnership with Brookfield Renewable Partners to acquire Westinghouse Electric Company for $7.8+ billion.

Nooree also counsels clients focused on delivering emerging technologies to public sector customers. Over the past several years, his practice has expanded to include advising on the intersection of government procurement and artificial intelligence. Nooree counsels clients on the negotiation of AI-focused procurement and non-procurement agreements with the U.S. government and the rollout of federal and state-level regulations impacting the procurement and deployment of AI solutions on behalf of government agencies.

Nooree also counsels clients navigating the Foreign Military Sales (FMS) program and Foreign Military Financing (FMF) arrangements. Nooree has advised both U.S. and ex-U.S. companies in connection with defense sales to numerous foreign defense ministries, including those of Australia, Israel, Singapore, South Korea, and Taiwan.

Nooree maintains an active pro bono practice focusing on appeals of denied industrial security clearance applications and public housing and housing discrimination matters. In addition to his work within the firm, Nooree is an active member of the American Bar Association’s Section of Public Contract Law and has served on the Section Council and the Section’s Diversity Committee. He also served as the firm’s Fellow for the Leadership Council on Legal Diversity program in 2023.

Robert Huffman

Bob Huffman counsels government contractors on emerging technology issues, including artificial intelligence (AI), cybersecurity, and software supply chain security, that are currently affecting federal and state procurement. His areas of expertise include the Department of Defense (DOD) and other agency acquisition regulations governing information security and the reporting of cyber incidents, the Cybersecurity Maturity Model Certification (CMMC) program, the requirements for secure software development self-attestations and bills of materials (SBOMs) emanating from the May 2021 Executive Order on Cybersecurity, and the various requirements for responsible AI procurement, safety, and testing currently being implemented under the October 2023 AI Executive Order. 

Bob also represents contractors in False Claims Act (FCA) litigation and investigations involving cybersecurity and other technology compliance issues, as well as more traditional government contracting cost, quality, and regulatory compliance issues.  These investigations include significant parallel civil/criminal proceedings growing out of the Department of Justice's Cyber Fraud Initiative, as well as investigations resulting from FCA qui tam lawsuits and other enforcement proceedings.  Bob has represented clients in over a dozen FCA qui tam suits.

Bob also regularly counsels clients on government contracting supply chain compliance issues, including those arising under the Buy American Act/Trade Agreements Act and Section 889 of the FY2019 National Defense Authorization Act.  In addition, Bob advises government contractors on rules relating to IP, including government patent rights, technical data rights, rights in computer software, and the rules applicable to IP in the acquisition of commercial products, services, and software.  He focuses this aspect of his practice on the overlap of these traditional government contracts IP rules with the IP issues associated with the acquisition of AI services and the data needed to train the large language models on which those services are based.

Bob is ranked by Chambers USA for his work in government contracts and he writes extensively in the areas of procurement-related AI, cybersecurity, software security, and supply chain regulation. He also teaches a course at Georgetown Law School that focuses on the technology, supply chain, and national security issues associated with energy and climate change.

Ryan Burnette

Ryan Burnette is a government contracts and technology-focused lawyer who advises on federal contracting compliance requirements and on government and internal investigations that stem from these obligations.  Ryan has particular experience with defense and intelligence contracting, as well as with cybersecurity, supply chain, artificial intelligence, and software development requirements.

Ryan also advises on Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) compliance, public policy matters, agency disputes, and government cost accounting, drawing on his prior experience in providing overall direction for the federal contracting system to offer insight on the practical implications of regulations. He has assisted industry clients with the resolution of complex civil and criminal investigations by the Department of Justice, and he regularly speaks and writes on government contracts, cybersecurity, national security, and emerging technology topics.

Ryan is especially experienced with:

Government cybersecurity standards, including the Federal Risk and Authorization Management Program (FedRAMP); DFARS 252.204-7012, DFARS 252.204-7020, and other agency cybersecurity requirements; National Institute of Standards and Technology (NIST) publications, such as NIST SP 800-171; and the Cybersecurity Maturity Model Certification (CMMC) program.
Software and artificial intelligence (AI) requirements, including federal secure software development frameworks and software security attestations; software bill of materials requirements; and current and forthcoming AI data disclosure, validation, and configuration requirements, including unique requirements that are applicable to the use of large language models (LLMs) and dual use foundation models.
Supply chain requirements, including Section 889 of the FY19 National Defense Authorization Act; restrictions on covered semiconductors and printed circuit boards; Information and Communications Technology and Services (ICTS) restrictions; and federal exclusionary authorities, such as matters relating to the Federal Acquisition Security Council (FASC).
Information handling, marking, and dissemination requirements, including those relating to Covered Defense Information (CDI) and Controlled Unclassified Information (CUI).
Federal Cost Accounting Standards and FAR Part 31 allocation and reimbursement requirements.

Prior to joining Covington, Ryan served in the Office of Federal Procurement Policy in the Executive Office of the President, where he focused on the development and implementation of government-wide contracting regulations and administrative actions affecting more than $400 billion worth of goods and services each year.  While in government, Ryan helped develop several contracting-related Executive Orders, and worked with White House and agency officials on regulatory and policy matters affecting contractor disclosure and agency responsibility determinations, labor and employment issues, IT contracting, commercial item acquisitions, performance contracting, schedule contracting and interagency acquisitions, competition requirements, and suspension and debarment, among others.  Additionally, Ryan was selected to serve on a core team that led reform of security processes affecting federal background investigations for cleared federal employees and contractors in the wake of significant issues affecting the program.  These efforts resulted in the establishment of a semi-autonomous U.S. Government agency to conduct and manage background investigations.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.