The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

The following article examines the state of play in AI policy and regulation in the United States. The previous article in this series covered the European Union.

Future of AI Policy in the U.S.

U.S. policymakers are focused on artificial intelligence (AI) platforms as the technology explodes into the mainstream.  AI has emerged as an active policy space across Congress and the Biden Administration, as officials scramble to educate themselves on the technology while crafting legislation, rules, and other measures to balance U.S. innovation leadership with national security priorities.

Over the past year, AI issues have drawn bipartisan interest and support.  House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced so far this Congress.  Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation.  Several AI bills—largely focused on the federal government’s internal use of AI—have also advanced through committee votes.

Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law.  The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments regarding AI safety and transparency from 15 technology companies. 

Despite strong bipartisan interest in AI regulation, commitment from leaders of major technology companies engaged in AI R&D, and broad support from the general public, passing comprehensive AI legislation remains a challenge.  No consensus has emerged around either substance or process, with different groups of Members, particularly in the Senate, developing their own versions of AI legislation through different procedures.  In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.

I. Major Policy & Regulatory Initiatives

Three versions of a comprehensive AI regulatory regime have emerged in Congress: two in the Senate and one in the House.  We preview these proposals below.

            A. SAFE Innovation: Values-Based Framework and New Legislative Process

In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence.  Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers.

Leader Schumer and his working group announced the SAFE Innovation Framework, five policy principles designed to encourage domestic AI innovation while ensuring adequate guardrails to protect national security, democracy, and public safety.  These principles include:   

  • Security:  Protect national security and promote economic security for workers by addressing the threat of job displacement.
  • Accountability: Ensure transparent and responsible AI systems and hold accountable those who promote misinformation, perpetuate bias, or infringe IP.
  • Foundations: Support development of algorithms and guardrails that protect democracy and promote foundational American values, including liberty, civil rights, and justice.
  • Explainability: Regulations should require disclosures from AI developers to educate the public about AI systems, data, and content.
  • Innovation: Regulations must promote U.S. global technology leadership.

Procedurally, Leader Schumer argued that the complexities of evolving technology require education of policymakers beyond the traditional committee hearing process.  Instead, he announced that he would convene a series of AI Insight Forums—closed-door sessions with Senators and AI experts, including industry leaders, interest groups, AI developers, and other stakeholders.

While Leader Schumer emphasized that the Insight Forums would not replace traditional congressional committee hearings and markups, he said that those tools alone are insufficient to create the “right policies.”  

The first AI Insight Forum was held on September 13, featuring civil rights and labor groups, the creative community, and the leaders of major technology companies engaged in AI R&D.

Leader Schumer said that the process has no fixed timeline, but that he expects to draft legislation within the next few months.

            B. Licensing Framework

Separate from Leader Schumer’s effort, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the chair and ranking member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced their own framework for AI regulation in September.  The Blumenthal-Hawley approach focuses on transparency and accountability to address potential harms of AI and protect personal data of consumers.

Unlike the SAFE Innovation framework, which aims to develop consensus legislation based on guiding principles, the Blumenthal-Hawley framework proposes several specific policies alongside broad principles, drawing on the multiple AI-related hearings the two senators have held in the Privacy Subcommittee this year.  Specifically, this consumer privacy-focused framework would:

  • Create an independent oversight body to administer a registration and licensing process for companies developing “sophisticated general purpose AI models” and models to be used in certain “high risk situations.”
  • Eliminate Section 230 immunity for AI-generated content.  This proposal follows legislation Senators Blumenthal and Hawley introduced in June, the No Section 230 Immunity for AI Act, which would deny Section 230 immunity to internet platforms for damages from AI-generated content.
  • Increase national security protections, including export controls, sanctions, and other restrictions to prevent foreign adversaries from obtaining advanced AI technologies.
  • Promote transparency, including requiring AI developers to disclose training data and other key information to users and other stakeholders, requiring disclaimers when users are interacting with AI systems, and publicly disclosing adverse incidents or AI system failures.
  • Protect consumers, including increased control over personal data used in AI systems and strict limitations on generative AI involving children. 

Senators Blumenthal and Hawley said they will develop legislation to implement the framework by the end of this year. 

            C. Blue-Ribbon Commission  

While the Senate engages in legislative fact-finding and drafting of concrete proposals based on “frameworks,” a bipartisan group of House members has introduced legislation taking an alternative approach.  The National AI Commission Act (H.R. 4223)—introduced in June by Representatives Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA), along with five additional colleagues (two Republicans and three Democrats)—would establish a bipartisan commission of experts with backgrounds in computer science or AI technology, civil society, industry and workforce issues, and government (including national security) to “review the United States’ current approach to AI regulation” and make recommendations for a risk-based AI regulatory framework and the structures necessary to implement it.

The President and congressional leaders would appoint 20 members to the commission, with each political party selecting half of the members.  Once all members of the commission are appointed, the commission would have to release an interim report within six months, a final report six months after the interim report, and a follow-up report one year after the final report.

While Senator Brian Schatz (D-HI) joined the House press release announcing the introduction of the bill, a Senate companion has not been formally introduced.

            D. Targeted Bipartisan Legislation

In addition to the bipartisan frameworks, several other bipartisan AI bills on targeted subject matter have been introduced, some of which have advanced through the committee process.  Subject-specific bills generally fall into six major categories: (1) promoting AI R&D leadership, (2) protecting national security, (3) requiring disclosure of AI-generated content, (4) guarding against the use of AI-generated “deepfakes” in elections and artistic performances, (5) workforce training, and (6) coordinating and facilitating federal agency AI use.

1. Promoting AI R&D Leadership

Members of both chambers have introduced legislation to promote U.S. leadership in AI R&D.  The Creating Resources for Every American to Experiment (CREATE) with Artificial Intelligence Act (CREATE AI Act) (S. 2714/H.R. 5077)—bipartisan, bicameral legislation led by Senators Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ), and Mike Rounds (R-SD) and Representatives Anna Eshoo (D-CA), Michael McCaul (R-TX), Don Beyer (D-VA), and Jay Obernolte (R-CA)—would establish the National Artificial Intelligence Research Resource (NAIRR).  The NAIRR would provide software, data, tools and services, AI testbeds, and other resources to facilitate AI research by higher education institutions, non-profits, and other federal funding recipients.

2. Protecting National Security

Several bipartisan bills have been introduced to require government agencies to prepare for national security threats facilitated by AI and other emerging technologies, including health crises and cyber attacks.  These include:

  • The Block Nuclear Launch by Autonomous Artificial Intelligence Act (S. 1394/H.R. 2894)—introduced by Senators Ed Markey (D-MA), Elizabeth Warren (D-MA), Jeff Merkley (D-OR), and Bernie Sanders (I-VT), and Representatives Ted Lieu (D-CA), Ken Buck (R-CO), Don Beyer (D-VA), Jim McGovern (D-MA), and Jill Tokuda (D-HI)—would prohibit the use of federal funds to deploy any AI or other autonomous system to launch a nuclear weapon, or to select or engage targets of a nuclear weapon, without “meaningful human control.”
  • The Artificial Intelligence and Biosecurity Risk Assessment Act (S. 2399/H.R. 4704)—introduced by Senators Ed Markey (D-MA) and Ted Budd (R-NC), and Representatives Anna Eshoo (D-CA) and Dan Crenshaw (R-TX)—would require the Health and Human Services Department to conduct risk assessments and implement strategies to address threats posed to public health and national security by AI and other technology advancements.
  • Senator Richard Blumenthal (D-CT) and Representatives Michael McCaul (R-TX), Gregory Meeks (D-NY), Jared Moskowitz (D-FL), Thomas Kean (R-NJ), and Del. Aumua Amata Coleman Radewagen (R-American Samoa) introduced a bill in February to require the State Department to report to Congress on efforts to implement the advanced capabilities component of the AUKUS trilateral security partnership between Australia, the United Kingdom, and the United States, including on advanced capabilities such as artificial intelligence.  The bill passed the House in March, 393-4 (under suspension of the rules), but remains pending in the Senate Foreign Relations Committee.
  • The AI for National Security Act (H.R. 1718), introduced by Representatives Jay Obernolte (R-CA), Jimmy Panetta (D-CA), and Patrick Ryan (D-NY), would update Defense Department procurement laws to allow the procurement of AI-enabled cybersecurity measures.

3. Disclosure

Several bills have also been introduced to require disclosure of AI-generated products, through a disclaimer requirement or other markings.  Bipartisan disclosure measures include the AI Labeling Act (S. 2691), introduced by Senators Brian Schatz (D-HI) and John Kennedy (R-LA), which would require all generative AI systems to include a “clear and conspicuous disclosure” that identifies content as AI-generated and that, to the extent feasible, is “permanent and unable to be easily removed by subsequent users.”

4. Guarding against “Deepfakes”

The growth of AI has stoked fears of “deepfakes”—AI-generated audiovisual content that appropriates the voice and likeness of individuals without their consent—particularly in elections and artistic pursuits.  Political campaigns and foreign actors, for example, could use AI systems to generate “deepfake” images or videos to influence elections.

Speaking at a recent Senate Rules Committee hearing on AI and elections, Leader Schumer emphasized the importance of AI guardrails to protect democracy, and committed to ensuring elections are a focus of a future AI Insight Forum.  Election-related AI legislation already introduced includes: 

  • The Protect Elections from Deceptive AI Act (S. 2770), led by Senators Amy Klobuchar (D-MN), Josh Hawley (R-MO), Chris Coons (D-DE), and Susan Collins (R-ME), which would prohibit the distribution of materially deceptive AI-generated content in ads related to a federal election.  The bill would also allow targeted candidates to seek removal of the content and recover damages.
  • The Require the Exposure of AI-Led (REAL) Political Advertisements Act (S. 1596/H.R. 3044)—sponsored by Senators Amy Klobuchar (D-MN), Cory Booker (D-NJ), and Michael Bennet (D-CO), and Representative Yvette Clarke (D-NY)—which would require all political ads that include AI-generated content to display a disclaimer identifying content as AI-generated.

Lawmakers are also concerned about the use of AI in art and advertising, such as unauthorized celebrity endorsements of products, or AI-generated music featuring the voices of specific artists without their consent.  Earlier this month, Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC) released a discussion draft of their Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which would impose liability on persons or companies who generate unauthorized digital reproductions of any person engaged in a performance, as well as on platforms hosting such content if they have knowledge that the content was not authorized by the subject.

5. Workforce

Members of both parties are concerned about the impact of AI systems on the American workforce.  One bipartisan House bill, the Jobs of the Future Act (H.R. 4498)—introduced by Representatives Darren Soto (D-FL), Lori Chavez-DeRemer (R-OR), Lisa Blunt Rochester (D-DE), and Andrew Garbarino (R-NY)—would require the Labor Department and the National Science Foundation (NSF) to draft a report for Congress analyzing the impact of AI on American workers.

6. Coordinating and Facilitating Federal Agency AI Use

Several bipartisan bills, including bills that have passed committee, relate to the federal government’s use of AI for its own purposes, either to facilitate services or to advise the public when an agency may use AI systems.  These include:

  • The AI LEAD Act (S. 2293), sponsored by Senators Gary Peters (D-MI) and John Cornyn (R-TX), would establish the position of Chief Artificial Intelligence Officer at each federal agency, who would “ensure the responsible research, development, acquisition, application, governance, and use” of AI by the agency.  The bill passed the Senate Homeland Security and Governmental Affairs Committee (HSGAC) in July, but it has not yet been considered on the Senate floor.
  • The AI Leadership Training Act (S. 1564), sponsored by Senators Gary Peters (D-MI) and Mike Braun (R-IN), would require the Office of Personnel Management to establish an AI training program for federal agency management and supervisory employees.  This bill passed out of HSGAC in May, but has not been considered on the Senate floor.
  • The AI Training Expansion Act (H.R. 4503), sponsored by Representatives Nancy Mace (R-SC) and Gerald Connolly (D-VA), would expand AI training within the executive branch.  The bill passed the House Oversight and Accountability Committee in July on a bipartisan 39-2 vote, but has not been considered on the floor.
  • The Transparent Automated Governance Act (S. 1865), introduced by Senators Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK), would require federal agencies to notify individuals whenever they are interacting with AI or other automated systems, or when such systems are making critical decisions.  The bill would also create an appeals process to ensure human review of AI-generated decisions.
  • The Consumer Safety Technology Act (H.R. 4814), a bipartisan bill—led by Representatives Darren Soto (D-FL), Michael Burgess (R-TX), Lori Trahan (D-MA), and Brett Guthrie (R-KY)—that would require the Consumer Product Safety Commission to establish a pilot program for exploring the use of AI to support its mission.

II. What’s Next?

            A. Legislative Outlook

Without an emerging legislative consensus, the future of comprehensive AI legislation remains uncertain.  However, more than a dozen bipartisan bills have been introduced on a range of specific AI-related topics in both chambers of Congress.  Targeted legislation introduced so far includes bills to promote U.S. leadership in AI R&D, to protect national security, to compel disclosure of AI use, to secure U.S. elections from deepfakes and other AI-generated misinformation, to address the impact of AI on U.S. workers, and to help the federal government leverage AI to deliver services.  With bipartisan support and widespread interest in AI issues, it is likely that at least some targeted AI legislation will become law in the near future.

            B. Executive Branch Developments

As Congress develops comprehensive AI legislation through hearings and working groups and advances narrower AI bills, the Biden Administration has taken concrete steps of its own, using both existing legal authorities and the bully pulpit to address AI risks and promote responsible AI development and deployment.

President Biden is expected to issue a comprehensive executive order addressing AI risks in the coming weeks.  While the Administration has not released details of its anticipated order, Dr. Arati Prabhakar, Director of the White House Office of Science and Technology Policy (OSTP), appearing at a September event on Building Responsible AI sponsored by the Information Technology Industry Council (ITI), said that the order will be “broad” and will reflect “everything that the President sees as possible under existing law to get better at managing risk and using the technology.”

Separately, the White House has been leading a months-long initiative to secure voluntary commitments from AI companies to mitigate risks, including commitments to safety testing and information sharing, investments in cybersecurity safeguards, and transparency.  Fifteen major technology companies have taken the White House pledge so far (seven in July, followed by eight more in September). 

The National Telecommunications and Information Administration (NTIA) is taking an active role in studying and developing policy recommendations for AI accountability.  Most notably, in April 2023 it issued a request for comment (“RFC”) asking stakeholders to suggest policies the Administration can advance to assure the public that AI systems are “legal, effective, safe, and otherwise trustworthy.”  NTIA’s work in this area has attracted significant public input and attention, with the agency receiving more than 1,400 comments in response to the RFC.  NTIA has explained that it will use these comments and other inputs to inform the agency’s forthcoming report making policy recommendations for “mechanisms that can create earned trust in AI systems.”

Following a directive from Congress (section 5301 of the NDAA for FY2021), the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0) in January 2023.  The AI RMF is voluntary guidance for public and private organizations designed to provide “standards, guidelines, best practices, methodologies, procedures, and processes” for developing trustworthy AI systems, assessing those systems, and mitigating risks from AI systems.  NIST collaborated with both government and private stakeholders to develop the framework, including several rounds of public comment.

Other agencies across the Executive Branch are engaged in efforts to regulate AI systems, advance U.S. leadership in AI innovation, and enforce existing laws in the evolving AI ecosystem. While agency initiatives are constantly evolving, some significant actions the Administration has taken in 2023 so far include:

  • In February, the U.S. Patent and Trademark Office (USPTO) issued a request for comment seeking public “input on the current state of AI technologies and inventorship issues that may arise in view of the advancement of such technologies, especially as AI plays a greater role in the innovation process.” The USPTO received 69 comments in response to the request, including on a range of questions about the use of AI in invention.
  • In April, four federal agencies—the Consumer Financial Protection Bureau, the Justice Department, the Equal Employment Opportunity Commission, and the Federal Trade Commission—released a joint statement on their commitment to using existing law to prevent bias and discrimination in AI, describing how AI falls within these agencies’ civil rights enforcement authorities. The agencies “pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
  • In May, the Department of Education released a report on the risks and opportunities related to AI in teaching, research, and assessment.
  • Also in May, NSF announced $140 million in funding to launch new National AI Research Institutes focused on six major research areas, including trustworthy AI, AI for cybersecurity, and AI for “smart climate” applications.
  • The Commerce Department’s National AI Advisory Committee delivered its first report to President Biden in May.
  • In July, NIST launched a new public working group to build on the AI RMF.
  • In August, the U.S. Copyright Office issued a “notice of inquiry” seeking public comment on fair use issues and the copyright status of AI outputs to “help assess whether legislative or regulatory steps in this area are warranted.”  In September, the Office extended the deadline for initial comments to October 30 and reply comments to November 29.
  • In August, the Federal Election Commission published a notice seeking public comment on whether to initiate a rulemaking on the regulation of AI in campaign advertisements.

These actions are not an exhaustive list of the measures the Administration has taken so far to address AI. Other agencies have also taken steps to use existing funding streams to invest in AI R&D, to issue reports or solicit public comments on AI issues within their jurisdictions, and to bring enforcement actions against AI companies for violations of existing law, among other actions. We expect this uptick in Executive Branch activity to continue in parallel with legislative efforts in Congress.

            C. Geopolitical Competition and AI

Congress is particularly focused on competition with China for technology leadership, and has taken steps to both promote U.S. innovation in foundational technologies, such as AI, and to restrict the transfer of critical emerging technologies to “foreign entities of concern,” including China. 

In July, the Senate voted 91-6 to add the Outbound Investment Transparency Act, which covers AI, as an amendment to the FY2024 National Defense Authorization Act (NDAA).  The bill would require notification to the Treasury Department of certain foreign investment activities involving AI, as well as semiconductors, quantum computers, and other sensitive technologies.  While the House-passed NDAA does not include any outbound investment provisions, some Members of the House are advocating for imposing stricter sanctions on companies in China.

The Biden Administration has also taken its own action to address outbound investments in “countries of concern.”  President Biden issued an executive order in August imposing restrictions on U.S. persons undertaking certain outbound transactions involving national security-sensitive technologies in the artificial intelligence, semiconductor, and quantum computing sectors.  The order—which will be implemented by regulations issued by the Treasury Department—prohibits certain transactions and requires U.S. parties engaged in other transactions to notify the Treasury Department.  We expect the NDAA conference process will include efforts to codify and enhance the rules proposed in the executive order.  Legislation that codifies or modifies the order would give Congress a greater role in oversight of investment restrictions on key technologies like AI.

III. Thought Leadership

Our public policy and regulatory teams closely track and contribute to the discussion around AI policy in the United States.

*          *          *

Holly Fechner

Holly Fechner advises clients on complex public policy matters that combine legal and political opportunities and risks. She leads teams that represent companies, entities, and organizations in significant policy and regulatory matters before Congress and the Executive Branch.

She is a co-chair of Covington’s Technology Industry Group and a member of the Covington Political Action Committee board of directors.

Holly works with clients to:

  • Develop compelling public policy strategies
  • Research law and draft legislation and policy
  • Draft testimony, comments, fact sheets, letters and other documents
  • Advocate before Congress and the Executive Branch
  • Form and manage coalitions
  • Develop communications strategies

She is the Executive Director of Invent Together and a visiting lecturer at the Harvard Kennedy School of Government. She serves on the board of directors of the American Constitution Society.

Holly served as Policy Director for Senator Edward M. Kennedy (D-MA) and Chief Labor and Pensions Counsel for the Senate Health, Education, Labor & Pensions Committee.

She received The American Lawyer’s “Dealmaker of the Year” award in 2019. The Hill has named her a “Top Lobbyist” from 2013 to the present, and she has been ranked in Chambers USA: America’s Leading Business Lawyers from 2012 to the present. One client noted to Chambers: “Holly is an exceptional attorney who excels in government relations and policy discussions. She has an incisive analytical skill set which gives her the capability of understanding extremely complex legal and institutional matters.” According to another client surveyed by Chambers, “Holly is incredibly intelligent, effective and responsive. She also leads the team in a way that brings out everyone’s best work.”

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Micaela McMurrough

Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group and as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill experience to provide regulatory and legislative advice to clients in a range of industries, including technology. He has particular expertise in matters involving the Judiciary Committees, such as intellectual property, antitrust, national security, immigration, and criminal justice.

Nick joined the firm’s Public Policy practice after serving most recently as Chief Counsel for Senator Dianne Feinstein (D-CA) and Staff Director of the Senate Judiciary Committee’s Human Rights and the Law Subcommittee, where he was responsible for managing the subcommittee and Senator Feinstein’s Judiciary staff. He also advised the Senator on all nominations, legislation, and oversight matters before the committee.

Previously, Nick was the General Counsel for the Senate Judiciary Committee, where he managed committee staff and directed legislative and policy efforts on all issues in the Committee’s jurisdiction. He also participated in key judicial and Cabinet confirmations, including of an Attorney General and two Supreme Court Justices. Nick was also responsible for managing a broad range of committee equities in larger legislation, including appropriations, COVID-relief packages, and the National Defense Authorization Act.

Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia. There he represented indigent clients charged with misdemeanor, felony, and capital offenses in federal court throughout all stages of litigation, including trial and appeal. He also coordinated district-wide habeas litigation following the Supreme Court’s decision in Johnson v. United States (invalidating the residual clause of the Armed Career Criminal Act).

Matthew Shapanka

Matthew Shapanka practices at the intersection of law, policy, and politics, advising clients on important legislative, regulatory and enforcement matters before Congress, state legislatures, and government agencies that present significant legal, political, and business opportunities and risks.

Drawing on more than 15 years of experience on Capitol Hill, private practice, state government, and political campaigns, Matt develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies. He regularly counsels businesses—especially technology companies—on matters involving intellectual property, national security, and regulation of critical and emerging technologies like artificial intelligence and autonomous vehicles.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters before the Committee, particularly federal election and campaign finance law, Federal Election Commission nominations, and oversight of the legislative branch, including U.S. Capitol security after the January 6, 2021 attack and the rules and procedures governing the Senate. Most significantly, Matt led the Committee’s staff work on the Electoral Count Reform Act—a landmark bipartisan law that updates the procedures for certifying and counting votes in presidential elections—and the Committee’s joint bipartisan investigation (with the Homeland Security Committee) into the security planning and response to the January 6th attack.

Both in Congress and at Covington, Matt has prepared dozens of corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at congressional committee hearings and depositions. He is a skilled legislative drafter who has composed dozens of bills and amendments introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas. Matt also leads the firm’s state policy practice, advising clients on complex multistate legislative and regulatory policy matters and managing state advocacy efforts.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt served in the administration of former Governor Deval Patrick (D-MA) as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on policy, communications, and compliance matters for federal economic recovery funding awarded to the state. He has also staffed federal, state, and local political candidates in Massachusetts and New Hampshire.

Jayne Ponder

Jayne Ponder counsels national and multinational companies across industries on data privacy, cybersecurity, and emerging technologies, including Artificial Intelligence and Internet of Things.

In particular, Jayne advises clients on compliance with federal, state, and global privacy frameworks, and counsels clients on navigating the rapidly evolving legal landscape. Her practice includes partnering with clients on the design of new products and services, drafting and negotiating privacy terms with vendors and third parties, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of Artificial Intelligence and Internet of Things technologies.

Jayne routinely represents clients in privacy and consumer protection enforcement actions brought by the Federal Trade Commission and state attorneys general, including related to data privacy and advertising topics. She also helps clients articulate their perspectives through the rulemaking processes led by state regulators and privacy agencies.

As part of her practice, Jayne advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.