This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2025 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and cryptocurrencies and blockchain. 

I. Artificial Intelligence

A. Federal Legislative Developments

In the first quarter, members of Congress introduced several AI bills addressing national security, including bills that would encourage the use of AI for border security and drug enforcement purposes.  Other AI legislative proposals focused on workforce skills, international investment in critical industries, U.S. AI supply chain resilience, and AI-enabled fraud.  Notably, members of Congress from both parties advanced legislation to regulate AI deepfakes and codify the National AI Research Resource, as discussed below.

  • CREATE AI Act:  In March, Reps. Jay Obernolte (R-CA) and Don Beyer (D-VA) re-introduced the Creating Resources for Every American To Experiment with Artificial Intelligence (“CREATE AI”) Act (H.R. 2385), following its introduction and near passage in the Senate last year.  The CREATE AI Act would codify the National AI Research Resource (“NAIRR”), with the goal of advancing AI development and innovation by offering AI computational resources, common datasets and repositories, educational tools and services, and AI testbeds to individuals, private entities, and federal agencies.  The CREATE AI Act builds on the work of the NAIRR Task Force, established by the National AI Initiative Act of 2020, which issued a final report in January 2023 recommending the establishment of NAIRR.

B. Federal Regulatory Developments

Following President Trump’s return to the White House, the Executive Branch reversed many of the Biden Administration’s AI policies and charted a new course for U.S. AI policy focused on bolstering national security and innovation. 

  • The White House:  The Trump Administration has made significant changes to the White House’s approach to AI.  On January 20, President Trump issued Executive Order 14148, revoking President Biden’s 2023 Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”  On January 23, President Trump signed Executive Order 14179 on “Removing Barriers to American Leadership in Artificial Intelligence.”  Among other things, EO 14179 requires the development of an “AI Action Plan” to implement its policy of “sustain[ing] and enhanc[ing] America’s global AI dominance.”  On February 6, the White House Office of Science & Technology Policy (“OSTP”) issued a Request for Information (“RFI”) seeking public input on the AI Action Plan required by EO 14179.  The RFI sought comment on 20 AI policy topics, including energy consumption, technical and safety standards, and intellectual property.  The comment period closed on March 15.
  • Department of Commerce:  On March 25, the Department of Commerce’s Bureau of Industry and Security (“BIS”) added 80 entities to the Entity List, including entities from China, the United Arab Emirates, South Africa, Iran, and Taiwan, with the goal of restricting the use of U.S. AI and other technologies for military applications.  The announcement follows BIS’s January 13 interim final rule (the “AI Diffusion Rule”), which would expand controls under the Export Administration Regulations (“EAR”) on the export and transfer of advanced integrated circuits and closed-weight dual-use AI models, and would impose global licensing requirements on AI model weights.  The AI Diffusion Rule is scheduled to come into effect on May 15.
  • National Institute of Standards and Technology:  On February 14, the National Institute of Standards and Technology (“NIST”) announced the creation of a new “Community Profile” to provide risk management guidance related to “Cybersecurity of AI Systems, AI-enabled Cyber Attacks, and AI-enabled Cyber Defense” (the “Cyber AI Profile”), and published a concept paper on the Cyber AI Profile’s risk management approaches.  On March 24, NIST released its final report on “Adversarial Machine Learning,” with voluntary guidance on methods for securing AI systems against adversarial manipulations and attacks.
  • Federal Trade Commission:  On February 11, the FTC issued a final decision and order against DoNotPay, a provider of a “robot lawyer” online subscription service, over allegations that DoNotPay made unsubstantiated claims that its service was an adequate substitute for human lawyers.  On March 11, the FTC announced a settlement with Evolv Technologies over allegations that the company made false claims about its AI-powered security screening system.  On April 10, the Senate voted, 50-46, to confirm Mark Meador as the newest FTC Commissioner, following the Senate Commerce, Science, and Transportation Committee’s vote to advance his nomination on March 12.  During his Committee hearing, Meador stated that the FTC should use its existing consumer protection authorities to address AI-related harms like deepfake pornography, and should “fully enforce the competition laws” to ensure that consumers have choices other than AI platforms with “political bias.”  Meador’s confirmation comes after President Trump fired FTC Commissioners Alvaro Bedoya and Rebecca Kelly Slaughter on March 18.
  • Copyright Office:  On January 29, the U.S. Copyright Office released Part 2 of its report on copyright and AI, focusing on the copyrightability of AI-generated outputs.  Among other things, the report found that “questions of copyrightability and AI can be resolved pursuant to existing law” and that copyright protections do “not extend to purely AI-generated material or material where there is insufficient human control over the expressive elements.”  The report noted that “whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.”  The Copyright Office released Part 1 of the report, which focused on digital replicas, in July 2024.
  • Securities & Exchange Commission:  On February 20, the Securities and Exchange Commission (“SEC”) announced the creation of the Cyber and Emerging Technologies Unit (“CETU”) to combat cyber-related misconduct and protect retail investors.  According to the SEC, CETU will prioritize enforcement related to “fraud committed using emerging technologies, such as artificial intelligence and machine learning,” among other priority areas.
  • Department of Defense:  On February 5, BigBear.ai announced a contract with the Department of Defense (“DOD”)’s Chief Digital and AI Office to prototype a system for analyzing geopolitical risks posed by near-peer adversaries, with the goal of enhancing DOD’s assessment of and response to global threats using advanced AI analytics.

C. State Legislative Developments

States began their 2025 legislative sessions by introducing hundreds of new AI bills in the first quarter, including more than a dozen bills that have been passed and that address algorithmic discrimination; AI-generated CSAM, intimate imagery, and election-related content; generative AI chatbots; and digital replicas.  Additionally, state lawmakers continued to assess new regulations for frontier models, including frontier model legislation in New York, Illinois, Maryland, and California.

  • Algorithmic Discrimination & Consumer Protection:  In Virginia, the legislature passed, and Governor Glenn Youngkin vetoed, the High-Risk AI Developer & Deployer Act (HB 2094), an AI consumer protection bill.  In his veto message, Governor Youngkin noted that “there are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more,” while warning that HB 2094 would “put[] an especially onerous burden on smaller firms and startups.”
  • Synthetic Content Laws:  Montana enacted HB 82, prohibiting the possession of AI-generated CSAM or intimate imagery.  South Dakota enacted SB 164, prohibiting the dissemination of deepfakes within 90 days of an election.  Kentucky enacted SB 4, requiring AI disclosures for political ads that contain AI-generated content. 
  • Generative AI and Chatbot Laws:  In Utah, the Governor signed HB 452, requiring businesses that use “mental health chatbots” to interact with individuals to disclose to users that they are interacting with a chatbot, as well as any advertisements presented through the chatbot.  HB 452 also prohibits the sale or sharing of user chatbot inputs or their health information and the use of user chatbot inputs to determine whether to display an advertisement.  Utah also enacted SB 226, requiring businesses to disclose interactions with generative AI if prompted or asked by the user, and requiring providers of “regulated services” to prominently and affirmatively disclose generative AI interactions if the interaction involves the collection of sensitive personal information or the provision of personalized recommendations, advice, or information that could reasonably be relied upon to make significant personal decisions.
  • Laws Regulating AI-Generated Impersonations & Digital Replicas:  Utah enacted SB 271, prohibiting the non-consensual use of “personal identities,” including reproductions of a person’s likeness, voice, or image created using AI, for commercial purposes if the use expresses or implies the depicted individual’s endorsement, creates a likelihood of confusion about the individual’s association, or creates a false impression that the individual approved of the use.  Like Tennessee’s ELVIS Act (enacted March 2024), SB 271 also prohibits the distribution, sale, or licensing of technologies, software, or tools that have the “primary purpose” of creating or modifying unauthorized reproductions of personal identities.  Similarly, Arkansas enacted HB 1071, prohibiting the unauthorized commercial use of another person’s image, video, three-dimensional generation, or voice generated using AI.
  • Laws Imposing Criminal Penalties for Synthetic Content:  Virginia enacted HB 2124, which prohibits the use of synthetic digital content for committing crimes involving fraud, slander, or libel.  New Jersey enacted A 3540, imposing criminal penalties for the creation of AI-generated audio or visual media with intent to be used as part of any crime or offense. 
  • Frontier Model Public Safety Legislation:  Montana enacted the Right to Compute Act (SB 212), requiring deployers of AI systems that control “critical infrastructure facilities” to develop risk management policies, while also establishing an individual “right to compute.”  California Sen. Scott Wiener introduced SB 53, which would establish whistleblower protections for employees of foundation model developers.  In March, the Joint California Policy Working Group on AI Frontier Models issued a draft report with several recommendations for frontier model regulation, including transparency requirements, third-party risk assessments, whistleblower protections, and adverse event reporting.

II. Connected & Automated Vehicles

In the first quarter, the outgoing Biden Administration took action to regulate connected vehicles (“CVs”).  In light of the new Trump Administration and its January 20 regulatory freeze, however, the future of these actions is uncertain.  President Trump has begun appointing regulators with jurisdiction over CVs, including Sean Duffy, who was confirmed as Secretary of Transportation on January 28, and Jonathan Morrison, who was nominated as Administrator of the National Highway Traffic Safety Administration on February 11.

  • BIS Connected Vehicle Supply Chain Final Rule: On January 14, BIS released its Final Rule on securing the connected vehicle supply chain.  The Final Rule restricts the import or sale of vehicle connectivity hardware and connected vehicles with software made in, owned, or controlled by China or Russia.  On January 20, President Trump issued a Presidential Memorandum on the “America First Trade Policy,” directing the Secretary of Commerce to “review and recommend appropriate action” with respect to the Final Rule and consider whether controls on technology transactions should be expanded to additional connected products.  On April 3, the White House released an executive summary of these recommendations, which does not mention connected vehicles.
  • NHTSA Proposes AV STEP:  On January 15, the National Highway Traffic Safety Administration (“NHTSA”) issued a Notice of Proposed Rulemaking for the automated driving system (“ADS”)-equipped Vehicle Safety, Transparency, and Evaluation Program (“AV STEP”), a voluntary program for vehicle manufacturers, ADS developers, fleet operators, and system integrators.  AV STEP participants would be required to submit detailed AV-related information to NHTSA and may request exemptions from applicable federal motor vehicle safety standards through a new streamlined process.  NHTSA received comments on AV STEP from 33 entities, including trade groups, AV-related companies, and vehicle manufacturers.  The comment period closed on March 17.
  • FTC Proposed Order Against GM and OnStar:  On January 16, the FTC released a proposed order against General Motors (“GM”) and OnStar based on allegations that they collected, used, and sold drivers’ precise geolocation and driving behavior data without adequately notifying consumers and obtaining their affirmative consent.  The order would ban GM, OnStar, and affiliated companies from disclosing consumers’ sensitive geolocation and driving behavior data to consumer reporting agencies, and require GM and OnStar to provide greater transparency and choice to consumers regarding the collection, use, and disclosure of connected vehicle data.
  • The Safe Vehicle Access for Survivors Act:  On March 17, Representatives Debbie Dingell (D-MI) and Dan Crenshaw (R-TX) introduced the Safe Vehicle Access for Survivors Act (H.R. 2110), which would establish a process for domestic abuse survivors to request the termination or disabling of connected vehicle services that could be misused by an abuser.  The bill is under consideration in the House Energy and Commerce Committee and has 22 cosponsors, including two Republicans.

III. Cryptocurrency & Blockchain

A. Federal Legislative Developments

Members of Congress introduced significant legislation related to cryptocurrencies and blockchain technologies in the first quarter, including bills to regulate stablecoins and digital assets.

  • Stablecoins:  Members of Congress introduced two significant pieces of legislation concerning stablecoins.  On February 4, Senators Bill Hagerty (R-TN), Tim Scott (R-SC), Kirsten Gillibrand (D-NY), and Cynthia Lummis (R-WY) introduced the Guiding and Establishing National Innovation for U.S. Stablecoins (“GENIUS”) Act (S. 394), which would establish a comprehensive federal regulatory framework for stablecoins and allow states to regulate stablecoin issuers with a certain market capitalization if the state regulation is “substantially similar” to the regulatory regime under the bill.  The GENIUS Act is currently under consideration in the Senate Banking Committee.  On March 26, a bipartisan group of representatives introduced the Stablecoin Transparency and Accountability for a Better Ledger Economy (“STABLE”) Act (H.R. 2392), which would also establish a regulatory framework for dollar-backed stablecoins.  In contrast to the GENIUS Act, the STABLE Act would require state regulatory regimes for stablecoin issuers to match the federal standard created under the bill.  In their press release, the authors of the STABLE Act expressed their intention to work with Senate colleagues to pass unified legislation.  The bill was voted out of the House Financial Services Committee in early April.

B. Federal Regulatory Developments

In the first quarter, the White House and federal agencies took significant steps to reverse the prior Administration’s cryptocurrency and blockchain policies and integrate digital assets into the traditional financial system.

  • The White House:  In January, President Trump signed Executive Order 14178 on “Strengthening American Leadership in Digital Financial Technology,” revoking the Biden Administration’s 2022 Executive Order 14067 on “Ensuring Responsible Development of Digital Assets” and establishing the Presidential Working Group on Digital Asset Markets.  The working group is tasked with proposing a federal regulatory framework for digital assets within 180 days.  To spearhead the efforts outlined in EO 14178, President Trump appointed David Sacks as the nation’s first “Crypto Czar,” responsible for coordinating federal policies on cryptocurrencies and blockchain technology.  On March 6, President Trump issued Executive Order 14233 on the “Establishment of the Strategic Bitcoin Reserve and United States Digital Asset Stockpile.”  The EO directs the Secretary of the Treasury to use lawfully seized cryptocurrencies, such as bitcoin and other digital assets, to establish a Strategic Bitcoin Reserve and a U.S. Digital Asset Stockpile.  The EO requires the bitcoin (“BTC”) in the Strategic Bitcoin Reserve to be maintained as a reserve asset and prohibits its sale.
  • Federal Deposit Insurance Corporation:  On March 28, the Federal Deposit Insurance Corporation (“FDIC”) issued a Financial Institution Letter (FIL-7-2025) rescinding the Biden Administration’s 2022 FIL-16-2022, titled Notification of Engaging in Crypto-Related Activities.  The 2022 letter had required state-chartered nonmember banks to obtain pre-approval before engaging in crypto-related activities.  Rescinding the pre-approval requirement is expected to encourage FDIC-regulated banks to explore services such as tokenized deposits and cryptocurrency custody, potentially expanding the integration of digital assets into conventional banking.
  • Office of the Comptroller of the Currency:  On March 7, the Office of the Comptroller of the Currency (“OCC”) issued Interpretive Letter 1183, withdrawing previous guidance that had required OCC approval before banks engaged in crypto-asset activities.  The letter emphasizes the need for consistent treatment of bank activities, irrespective of the underlying technology, and signals a more accommodating stance toward blockchain innovations within the banking sector.
  • Securities & Exchange Commission:  On January 21, Acting SEC Chair Mark Uyeda announced the formation of a Crypto Task Force headed by Commissioner Hester Peirce. Since January, the SEC has dismissed several high-profile lawsuits against cryptocurrency companies, marking a departure from previous enforcement strategies that favored litigation over regulatory guidance. On April 10, the Senate confirmed Paul Atkins to be SEC Chair. During his Senate confirmation hearing, Atkins emphasized that establishing a rational regulatory framework for cryptocurrencies would be a top priority under his leadership.  

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for more than twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill and legal experience to provide public policy and crisis management counsel to clients in a range of industries.

Nick assists clients in developing and implementing policy solutions to litigation and regulatory matters, including on issues involving antitrust, artificial intelligence, bankruptcy, criminal justice, financial services, immigration, intellectual property, life sciences, national security, and technology. He also represents companies and individuals in investigations before U.S. Senate and House Committees.

Nick previously served as General Counsel for the U.S. Senate Judiciary Committee, where he managed committee staff and directed legislative efforts. He also participated in key judicial and Cabinet confirmations, including of Attorneys General and Supreme Court Justices. Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia.

Mike Nonaka

Michael Nonaka is co-chair of the Financial Services Group and advises banks, financial services providers, fintech companies, and commercial companies on a broad range of compliance, enforcement, transactional, and legislative matters.

He specializes in providing advice relating to federal and state licensing and applications matters for banks and other financial institutions, the development of partnerships and platforms to provide innovative financial products and services, and a broad range of compliance areas such as anti-money laundering, financial privacy, cybersecurity, and consumer protection. He also works closely with banks and their directors and senior leadership teams on sensitive supervisory and strategic matters.

Mike plays an active role in the firm’s Fintech Initiative and works with a number of banks, lending companies, money transmitters, payments firms, technology companies, and service providers on innovative technologies such as bitcoin and other cryptocurrencies, blockchain, big data, cloud computing, same day payments, and online lending. He has assisted numerous banks and fintech companies with the launch of innovative deposit and loan products, technology services, and cryptocurrency-related products and services.

Mike has advised a number of clients on compliance with TILA, ECOA, TISA, HMDA, FCRA, EFTA, GLBA, FDCPA, CRA, BSA, USA PATRIOT Act, FTC Act, Reg. K, Reg. O, Reg. W, Reg. Y, state money transmitter laws, state licensed lender laws, state unclaimed property laws, state prepaid access laws, and other federal and state laws and regulations.

Jayne Ponder

Jayne Ponder provides strategic advice to national and multinational companies across industries on existing and emerging data privacy, cybersecurity, and artificial intelligence laws and regulations.

Jayne’s practice focuses on helping clients launch and improve products and services that involve laws governing data privacy, artificial intelligence, sensitive data and biometrics, marketing and online advertising, connected devices, and social media. For example, Jayne regularly advises clients on the California Consumer Privacy Act, Colorado AI Act, and the developing patchwork of U.S. state data privacy and artificial intelligence laws. She advises clients on drafting consumer notices, designing consent flows and consumer choices, drafting and negotiating commercial terms, building consumer rights processes, and undertaking data protection impact assessments. In addition, she routinely partners with clients on the development of risk-based privacy and artificial intelligence governance programs that reflect the dynamic regulatory environment and incorporate practical mitigation measures.

Jayne routinely represents clients in enforcement actions brought by the Federal Trade Commission and state attorneys general, particularly in areas related to data privacy, artificial intelligence, advertising, and cybersecurity. Additionally, she helps clients to advance advocacy in rulemaking processes led by federal and state regulators on data privacy, cybersecurity, and artificial intelligence topics.

As part of her practice, Jayne also advises companies on cybersecurity incident preparedness and response, including by drafting, revising, and testing incident response plans, conducting cybersecurity gap assessments, engaging vendors, and analyzing obligations under breach notification laws following an incident.

Jayne maintains an active pro bono practice, including assisting small and nonprofit entities with data privacy topics and elder estate planning.

John Mizerak

Jack Mizerak is special counsel in the firm’s Washington, DC office, focusing on environmental and product safety matters. He has experience with investigations, litigation, and regulatory issues under the Clean Air Act, the Motor Vehicle Safety Act, the Consumer Product Safety Act, the Clean Water Act, CERCLA, and other environmental, consumer protection, and energy regimes. Jack has particular expertise in environmental enforcement matters, including fact development, government engagement, and adoption of compliance reforms to address underlying issues and prevent recurrence of violations. He has extensive knowledge of the automotive sector, on both emissions and safety issues, including emerging regulatory trends for both zero emission powertrains and traditional internal combustion engines.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Jess Gonzalez Valenzuela

Jess Gonzalez Valenzuela (they/them and she/her) is an associate in the firm’s San Francisco office and is a member of the Data Privacy and Cybersecurity and Corporate Practice Groups.

Jess helps clients address complex, cutting-edge challenges to manage data privacy and cybersecurity risk, including by providing regulatory compliance advice in connection with specific business practices and assisting in responding to cybersecurity incidents. Jess also maintains an active pro bono practice.

Jess is committed to DEI efforts in the legal profession, is a member of Covington’s LGBTQ+ and Latino Firm Resource Groups, and is working to develop a first generation professionals network and a disability advocacy network at Covington.

Conor Kane

Conor Kane advises clients on a broad range of privacy, artificial intelligence, telecommunications, and emerging technology matters. He assists clients with complying with state privacy laws, developing AI governance structures, and engaging with the Federal Communications Commission.

Before joining Covington, Conor worked in digital advertising helping teams develop large consumer data collection and analytics platforms. He uses this experience to advise clients on matters related to digital advertising and advertising technology.

Max Larson

Max Larson is an associate in the firm’s Washington, DC office. She is a member of the Technology and Communications Regulation Practice Group.

McCall Wells

McCall Wells is an associate in the firm’s San Francisco office. Her practice focuses on matters related to technology transactions and technology regulation.

McCall also maintains an active pro bono practice, with a particular focus on immigration law. She also advises nonprofit companies on corporate governance and IP concerns.

McCall earned her J.D. from the Georgetown University Law Center, where she was a Global Law Scholar and student attorney in the Communications & Technology Law Clinic. Prior to joining the firm, McCall was a fellow at a trade association focused on developing responsible regulation for digital assets. She has experience advocating on behalf of technology companies before state and federal agencies.