This quarterly update highlights key legislative, regulatory, and litigation developments in the first quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity.  As noted below, some of these developments provide industry with the opportunity for participation and comment.

Artificial Intelligence

Federal Legislative Developments

AI remained a focal point for Congress this quarter.  Multiple bills proposing to regulate AI were introduced, covering issues such as antitrust, transparency, and training data, and House leadership created a bipartisan task force to address AI regulation.

  • Antitrust:  Some bills introduced this quarter relate to the potential impact of AI on competition.  For example, in January, Senator Klobuchar (D-MN) introduced the Preventing Algorithmic Collusion Act of 2024 (S. 3686).  The Act would create a presumption that a defendant entered into an agreement, contract, or conspiracy in restraint of trade in violation of the antitrust laws if the defendant: (i) distributed a pricing algorithm to two or more persons with the intent that the pricing algorithm be used to set or recommend a price or (ii) used a pricing algorithm to set or recommend a price or commercial term of a product or service and the pricing algorithm was used by another person to set or recommend a price.  The Act also would require companies using algorithms to set prices to provide transparency and would prohibit the use of “nonpublic competitor data” to train any pricing algorithm.
  • Transparency:  Other bills focus on transparency requirements for AI.  For instance, in March, Representative Eshoo (D-CA-16), along with three bipartisan co-sponsors, introduced the Protecting Consumers from Deceptive AI Act (H.R. 7766).  The Act would direct the National Institute of Standards and Technology (“NIST”) to facilitate the development of standards for identifying and labeling AI-generated content, including through technical measures such as provenance metadata, watermarking, and digital fingerprinting.  The Act also would require generative AI developers to include machine-readable disclosures within audio or visual content generated by their AI applications.  Providers of covered online platforms would have to implement the disclosures to label AI-generated content.
  • Consent for Training Data:  Legislative proposals also focus on consent for use of training data.  For example, Senators Welch (D-VT) and Lujan (D-NM) introduced the Artificial Intelligence Consumer Opt-in, Notification, Standards, and Ethical Norms for Training Act or the “AI CONSENT Act” (S. 3975).  The Act would require entities to receive an individual’s express informed consent before using “covered data” (defined broadly) to train an AI system.  
  • AI Task Force:  This quarter, House Speaker Mike Johnson (R-LA-4) and Minority Leader Hakeem Jeffries (D-NY-8) announced the establishment of a bipartisan Task Force on AI.  Speaker Johnson and Leader Jeffries have each appointed 12 members to the Task Force.  Among other things, the Task Force will produce a comprehensive report that will include: (i) guiding principles; (ii) forward-looking recommendations; and (iii) bipartisan policy proposals.

Federal Regulatory Developments

  • National Science Foundation (“NSF”):  The NSF announced the launch of the National AI Research Resource (“NAIRR”), a two-year pilot program that will support AI researchers and aid innovation.  NSF will partner with ten other federal agencies as well as 25 private sector, nonprofit, and philanthropic organizations to power AI research and inform the design of the full NAIRR ecosystem over time.  Specifically, the NAIRR pilot will support research to advance safe, secure, and trustworthy AI, as well as the application of AI to challenges in healthcare and environmental and infrastructure sustainability.  The NAIRR launch meets a goal outlined in the White House’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“EO”), which directs NSF to launch a pilot for the NAIRR.
  • Department of Commerce:  The Department of Commerce published a proposed rule to require providers and foreign resellers of U.S. Infrastructure-as-a-Service products to, among other things, notify the Department of Commerce when a foreign person transacts with that provider or reseller to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.  The AI provisions of the proposed rule stem from mandates in the EO on AI.  Comments are due by April 29, 2024.
  • Federal Communications Commission (“FCC”):  The FCC released a declaratory ruling stating that under the Telephone Consumer Protection Act (“TCPA”), telemarketing calls using an artificial or prerecorded voice simulated or generated through AI technology can be made only with the prior express written consent of the called party unless an exemption applies.  The declaration followed the submission of reply comments supporting the change by the Attorneys General of 25 states and the District of Columbia.  Further, the FCC announced that it will relaunch the Consumer Advisory Committee (“CAC”) to focus on emerging AI technologies and consumer privacy issues.
  • Federal Trade Commission (“FTC”):  The FTC issued a supplemental Notice of Proposed Rulemaking (“NPRM”) that would amend the Rule on Impersonation of Government and Business (“Impersonation Rule”) to prohibit the impersonation of individuals using AI and extend liability for violations of the Impersonation Rule.  Comments are due by April 30, 2024.  Additionally, the FTC published a blog post warning AI companies that it may be unfair and deceptive to quietly change their terms of service to adopt more permissive data practices, such as using consumers’ data for AI training, without adequate notice to consumers.
  • White House Office of Management and Budget (“OMB”):  OMB issued its first government-wide policy memorandum on deploying AI in the federal government and managing its risks.  The memorandum establishes requirements and guidance for federal agencies that aim to strengthen AI governance, advance responsible AI innovation, and manage AI risks, especially those risks that affect the rights and safety of the public.  For example, the memorandum requires agencies to implement minimum governance procedures for certain rights-impacting and safety-impacting AI use cases.
  • U.S. Patent and Trademark Office (“USPTO”):  The USPTO published guidance stating that while AI systems and other “non-natural persons” cannot be listed as inventors in patent applications, the use of an AI system by a natural person does not preclude that person from qualifying as an inventor.  Further, the person using the AI must have contributed significantly to the invention; simply overseeing an AI system’s creation is not sufficient.  Those seeking patents must disclose whether AI was used during the invention process.  In conjunction with the guidance, the USPTO issued examples to illustrate the application of the guidance in specific situations.  Comments on the guidance and examples are due by May 13, 2024.

AI Litigation Developments

Plaintiffs continue to test theories in lawsuits against companies developing AI models, with a number of suits focused on copyright infringement and related claims.  The defendants in the copyright cases have responded by arguing, among other things, that the plaintiffs failed to plead facts establishing that models were trained on materials covered by copyright registrations, failed to support claims that the model is both an infringing “copy” and “derivative” of each registered work on which it was allegedly trained, and failed to identify copyright management information (“CMI”) that the defendants allegedly altered or removed.  2024 Q1 litigation developments include, for example:

  • New Copyright Complaints
    • On March 8, a group of book authors brought a direct copyright infringement claim against Nvidia, alleging that Nvidia copied and used their copyright-protected works to train its NeMo Megatron series of LLMs.  Nazemian et al. v. Nvidia Corp., 24-cv-1454 (N.D. Cal.).  The same day, the authors also brought a copyright infringement suit against MosaicML for direct infringement and Databricks, Inc. for vicarious infringement concerning the training of Mosaic’s MPT LLM model series, including MPT-7B and MPT-30B.  O’Nan et al. v. Databricks Inc. et al., 3:24-cv-01451 (N.D. Cal.).
    • On February 28, two suits were filed by news media organizations against OpenAI, alleging that OpenAI violated the Digital Millennium Copyright Act by training the ChatGPT LLM with copies of their works from which copyright management information had been removed.  Raw Story Media, AlterNet Media v. OpenAI, et al., 24-cv-1514 (S.D.N.Y.); The Intercept Media, Inc. v. OpenAI, Inc. et al., 1:24-cv-01515 (S.D.N.Y.).  The Intercept also named Microsoft as a defendant.
    • On January 5, a class action complaint was filed by journalists and authors of nonfiction works against Microsoft and OpenAI alleging that the companies unlawfully reproduced their copyrighted works for the purpose of training their LLMs and ChatGPT.  Basbanes v. Microsoft, 1:24-cv-84 (S.D.N.Y.).  The Basbanes suit has since been consolidated with Authors Guild, et al. v. OpenAI Inc., et al., 23-cv-08292 (S.D.N.Y.) and Alter, et al. v. OpenAI Inc., et al., 23-cv-10211 (S.D.N.Y.).
  • Responses in New York Times Case:  On February 26, OpenAI filed a motion to dismiss in The New York Times Company v. Microsoft et al., 1:23-cv-11195 (S.D.N.Y.), arguing, among other things, that NYT failed to allege that OpenAI had actual knowledge of specific acts of infringement for the purposes of contributory copyright liability and that NYT failed to identify the CMI that OpenAI allegedly removed.  On March 3, Microsoft filed a partial motion to dismiss, arguing, among other things, that NYT failed to state a claim against Microsoft for contributory infringement because it did not allege an underlying direct infringement by end users, and that NYT cannot allege Microsoft’s actual knowledge of (or willful blindness to) any act of direct infringement.  On March 11 and March 18, NYT responded to both motions to dismiss, making procedural arguments and contending, among other things, that OpenAI had the knowledge required for contributory infringement because NYT had actually informed OpenAI of the alleged infringement.  Though fair use arguments are not being litigated at this stage, both parties have discussed fair use case law in their briefing.
  • Copyright Management Information (“CMI”) Dismissals in GitHub Case:  On January 3, the court in Doe v. GitHub, 22-cv-6823 (N.D. Cal.) issued its second decision on a motion to dismiss, granting in part and denying in part the motion as to six of the plaintiffs’ eight claims.  The court found that some of the plaintiffs sufficiently alleged Article III standing to seek damages based on amended allegations that included examples of their code that were output by the Copilot coding tool.  The court also found that certain state law claims were preempted by the Copyright Act and dismissed them with prejudice.  In addition, the court dismissed the plaintiffs’ CMI claims under the Digital Millennium Copyright Act with leave to amend, holding that such claims lie only when CMI is removed or altered from an identical copy of a copyrighted work, and the amended complaint identified only examples of outputs alleged to be modifications of copyrighted code, not identical copies.  On January 25, the plaintiffs filed a second amended complaint, re-alleging the CMI claims and bringing two breach of contract claims for open-source license violations and selling licensed materials in violation of GitHub’s policies.  On February 28, defendants Microsoft and GitHub moved to dismiss again, arguing that the plaintiffs still failed to plead that CMI was removed from identical copies of the plaintiffs’ works.
  • Response to Amended Complaint in Google Case:  On January 5, the plaintiffs in Leovy v. Google LLC, 3:23-cv-03440 (N.D. Cal.) amended their complaint to name the plaintiffs, allege different causes of action, and plead additional allegations concerning Google’s alleged violations of the plaintiffs’ rights under property, privacy, and copyright law, among other things.  On February 9, the defendant moved to dismiss the plaintiffs’ amended complaint with prejudice.  With respect to the plaintiffs’ web scraping claims, the defendant argued, “outside copyright law (including its protection for fair use), there is no general right to control publicly available information.”  The defendant argued that the plaintiffs’ direct copyright infringement claims based on generative AI output should be dismissed because the plaintiffs pled that “Bard’s output necessarily infringes the copyrights in all the works Bard trained on” without providing any examples of a “substantially similar” infringing output.  The motion did not argue for dismissal of the direct copyright infringement claim based on the training process.  With respect to the plaintiffs’ negligence claims, the defendant argued that the plaintiffs failed to adequately allege that it owed the plaintiffs a duty of care and that the economic loss rule otherwise barred a negligence claim.
  • Dismissals and Consolidation in N.D. Cal. Litigation:  On February 12, in a consolidated opinion, the court granted the defendants’ motions to dismiss the claims for vicarious infringement, violation of the Digital Millennium Copyright Act, and negligence in Tremblay et al. v. OpenAI, Inc. et al., 3:23-cv-03223 (N.D. Cal.) and the related case of Silverman, et al. v. OpenAI, Inc., et al., 23-cv-03416 (N.D. Cal.), with leave to amend.  The court also dismissed the plaintiffs’ unjust enrichment claim with prejudice, but allowed the unfair competition claim to proceed.  On February 16, the case was consolidated with Silverman and Chabon v. OpenAI, et al., 23-cv-04625 (N.D. Cal.).  On March 13, the plaintiffs filed a first consolidated amended complaint (under the new caption, In Re ChatGPT Litigation), narrowing their claims to two counts: direct copyright infringement and violation of California’s Unfair Competition Law.
  • Right of Publicity Complaint:  On January 25, representatives of comedian George Carlin’s estate filed suit in Main Sequence, Ltd. et al. v. Dudesy, LLC, 24-cv-711 (C.D. Cal.), alleging that the defendants, by training an AI model to mimic Carlin’s stand-up performances and by publishing the allegedly AI-created “George Carlin Special,” unlawfully used Carlin’s name, image, and likeness without consent, in addition to infringing copyrighted Carlin materials.  The complaint expresses some uncertainty as to whether the “George Carlin Special” was produced using a generative AI model or involved a human-written script paired with assistive tools such as an AI voice generator.  The plaintiffs allege that in either case, Carlin’s image and likeness were unlawfully used and his reputation harmed.

Connected & Automated Vehicles

  • Autonomous Vehicle Accessibility Act:  On January 30, Representatives Greg Stanton (D-AZ) and Brian Mast (R-FL), members of the House Transportation and Infrastructure Committee, introduced the bipartisan Autonomous Vehicle Accessibility Act (H.R. 7126).  The Act is intended to help people with disabilities better access the mobility and independence benefits of ride-hail CAVs, such as by:  (1) prohibiting states from issuing motor vehicle operator’s licenses in a manner that prevents a qualified individual with an ADA disability from riding as a passenger in a vehicle equipped with an automated driving system that is operating in fully autonomous mode; and (2) requiring the Secretary of Transportation to conduct an accessible infrastructure study to determine the best practices for public transportation infrastructure to be modified to improve the ability of Americans with disabilities to find, access, and use ride-hail autonomous vehicles.  The bill was referred to the Subcommittee on Highways and Transit on February 12, 2024.
  • Focus on Data Privacy Practices of Vehicle Manufacturers:  On February 27, Senator Markey (D-MA) sent a letter to the FTC asking the FTC to investigate the data privacy practices of car manufacturers.  Senator Markey noted that the responses automakers provided to his late 2023 inquiry “gave [him] little comfort” and that the companies’ “ambiguity and evasiveness calls out for the investigatory powers of the FTC.”  The letter “urge[s] the [FTC] to use the full force of its authorities to investigate the automakers’ privacy practices and take all necessary enforcement actions to ensure that consumer privacy is protected.”
  • Continued Attention on Connectivity and Domestic Violence:  As we reported in our last update, the FCC has taken steps to better understand certain safety issues implicated by connected vehicles, including the potential for wireless connectivity and location data to be used to harm partners in abusive relationships.  Continuing this focus, on February 28, the FCC issued a press release reporting that Chairwoman Rosenworcel circulated a Notice of Proposed Rulemaking regarding how the agency can leverage existing law to ensure that car manufacturers and wireless service providers “understand the full impact of the connectivity tools in new vehicles and how these applications can be used to stalk, harass, and intimidate.”  If adopted, the NPRM “would seek comment on the types and frequency of use of connected car services that are available in the marketplace today.”  Among other things, the NPRM would ask whether changes to the FCC’s rules implementing the Safe Connections Act are needed to address the impact of connected car services on domestic abuse survivors.  It also would seek comment on what steps connected car services can proactively take to protect survivors from the misuse of such services.

Data Privacy & Cybersecurity

Privacy

With respect to privacy, a number of states kicked off the new year with new privacy laws and the FTC continued to bring enforcement actions related to companies’ privacy practices.

  • New State Privacy Laws:  Legislatures in New Jersey, New Hampshire, and Kentucky passed new data privacy laws that largely resemble the approaches taken under existing privacy frameworks in the U.S.  In Maryland, each chamber of the legislature has passed a comprehensive privacy bill, and the two chambers are working to reconcile differences between their versions.  Additionally, Nebraska enacted a genetic privacy law regulating direct-to-consumer (“DTC”) genetic testing companies.  The law is one of a flurry of bills regarding DTC genetic testing that have been introduced in several states since the beginning of 2024, following the enactment of several DTC genetic testing laws in 2023.
  • FTC Consent Orders:  The FTC recently announced proposed consent orders with Outlogic and InMarket Media related to the use of precise geolocation data.  Both companies collect location data using software development kits (“SDKs”) installed in first- and third-party apps, among other data sources.  According to the FTC’s complaints, Outlogic sold this data to third parties (including in a manner that revealed consumers’ visits to sensitive locations) without obtaining adequate consent, and InMarket used this data to facilitate targeted advertising without notifying consumers that their location data would be used for that purpose.  In both cases, the FTC alleged that these acts and practices constituted unfair and/or deceptive acts or practices under Section 5 of the FTC Act.

Cybersecurity

Federal cybersecurity regulators have had a busy start to 2024 and set in motion a number of new proposed rules and cybersecurity standards that, if implemented, will redefine the landscape for federal cybersecurity regulations in the years ahead.

  • Critical Infrastructure Broadly Defined:  The U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) published a proposed rule to implement the cyber incident reporting requirements for critical infrastructure entities from the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”).  Notably, the proposed rule broadly defines critical infrastructure entities (pursuant to Presidential Policy Directive 21) across the 16 critical infrastructure sectors.  In total, CISA estimates that over 300,000 entities would be covered by the rule.  CIRCIA has two cyber incident reporting requirements for covered critical infrastructure entities: a 24-hour requirement to report ransomware payments and a 72-hour requirement to report covered cyber incidents to CISA.  Under CIRCIA, the final rule must be published by September 2025.
  • Cybersecurity Framework 2.0:  The U.S. National Institute of Standards and Technology (“NIST”) published version 2.0 of its Cybersecurity Framework.  The new version incorporates some significant updates to the Framework including: expanded application (i.e., broad application regardless of cybersecurity program maturity); a new “govern” function (i.e., whether an organization’s cybersecurity risk management strategy, expectations, and policy are established, communicated, and monitored); increased focus on cybersecurity supply chain risk management (e.g., whether an organization performs due diligence on potential suppliers and monitors the relationship through the technology or service life cycle); and new reference tools.
  • Federal Cybersecurity Enforcement Action:  The U.S. Department of Health and Human Services Office for Civil Rights announced that it had settled a cybersecurity investigation with Montefiore Medical Center, a non-profit hospital system based in New York City, for $4.75 million.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill experience to provide regulatory and legislative advice to clients in a range of industries, including technology. He has particular expertise in matters involving the Judiciary Committees, such as intellectual property, antitrust, national security, immigration, and criminal justice.

Nick joined the firm’s Public Policy practice after serving most recently as Chief Counsel for Senator Dianne Feinstein (D-CA) and Staff Director of the Senate Judiciary Committee’s Human Rights and the Law Subcommittee, where he was responsible for managing the subcommittee and Senator Feinstein’s Judiciary staff. He also advised the Senator on all nominations, legislation, and oversight matters before the committee.

Previously, Nick was the General Counsel for the Senate Judiciary Committee, where he managed committee staff and directed legislative and policy efforts on all issues in the Committee’s jurisdiction. He also participated in key judicial and Cabinet confirmations, including of an Attorney General and two Supreme Court Justices. Nick was also responsible for managing a broad range of committee equities in larger legislation, including appropriations, COVID-relief packages, and the National Defense Authorization Act.

Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia. There he represented indigent clients charged with misdemeanor, felony, and capital offenses in federal court throughout all stages of litigation, including trial and appeal. He also coordinated district-wide habeas litigation following the Supreme Court’s decision in Johnson v. United States (invalidating the residual clause of the Armed Career Criminal Act).

Phillip Hill

Phillip Hill focuses on complex copyright matters with an emphasis on music, film/TV, video games, sports, theatre, and technology.

Phillip’s global practice includes all aspects of copyright and the DMCA, as well as trademark and right of publicity law, and encompasses the full spectrum of litigation, transactions, counseling, legislation, and regulation. He regularly represents clients in federal and state court, as well as before the U.S. Copyright Royalty Board, Copyright Office, Patent & Trademark Office, and Trademark Trial & Appeal Board.

Through his work at the firm and prior industry and in-house experience, Phillip has developed a deep understanding of his clients’ industries. He also regularly advises on cutting-edge topics like generative artificial intelligence, the metaverse, and NFTs.

In addition to his full-time legal practice, Phillip serves as Chair of the ABA Music and Performing Arts Committee, frequently speaks on emerging trends, is active in educational efforts, and publishes regularly.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

Shayan Karbassi

Shayan Karbassi is an associate in the firm’s Washington, DC office. He is a member of the firm’s Data Privacy and Cybersecurity and White Collar and Investigations Practice Groups. Shayan advises clients on a range of cybersecurity and national security matters. He also maintains an active pro bono practice.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the medical device, pharmaceutical, biotechnology, eCommerce, and digital health industries through strategic advice on complex FDA issues, helping to bring innovative products to market while ensuring regulatory compliance. With a focus on cutting-edge medical technologies and digital health products and services, Olivia regularly helps new and established companies navigate a variety of state and federal regulatory, legislative, and compliance matters throughout the total product lifecycle. She has experience counseling clients on the development, FDA regulatory classification, and commercialization of digital health tools, including clinical decision support software, mobile medical applications, general wellness products, medical device data systems, administrative support software, and products that incorporate artificial intelligence, machine learning, and other emerging technologies.

Olivia also assists clients in advocating for legislative and regulatory policies that will support innovation and the safe deployment of digital health tools, including by drafting comments on proposed legislation, frameworks, whitepapers, and guidance documents. Olivia keeps close to the evolving regulatory landscape and is a frequent contributor to Covington’s Digital Health blog. Her work also has been featured in the Journal of Robotics, Artificial Intelligence & Law, Law360, and the Michigan Journal of Law and Mobility.

Jorge Ortiz

Jorge Ortiz is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and the Technology and Communications Regulation Practice Groups.

Jorge advises clients on a broad range of privacy and cybersecurity issues, including topics related to privacy policies and compliance obligations under U.S. state privacy regulations like the California Consumer Privacy Act.

Jemie Fofanah

Jemie Fofanah is an associate in the firm’s Washington, DC office. She is a member of the Privacy and Cybersecurity Practice Group and the Technology and Communication Regulatory Practice Group. She also maintains an active pro bono practice with a focus on criminal defense and family law.

Andrew Longhi

Andrew Longhi is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and Technology and Communications Regulation Practice Groups.

Andrew advises clients on a broad range of privacy and cybersecurity issues, including compliance obligations, commercial transactions involving personal information and cybersecurity risk, and responses to regulatory inquiries.

Andrew is admitted to the Bar under DC App. R. 46-A (Emergency Examination Waiver); practice supervised by DC Bar members.

Lauren Gerber

Lauren Gerber is an experienced litigator focused on product liability and mass tort defense and complex civil litigation across the technology and pharmaceutical industries.
Lauren has represented clients at all stages of litigation, including fact and expert discovery, dispositive motions, pre-trial Daubert motions, and motions in limine. She also has experience representing clients preparing for trial in patent, insurance recovery, and employment discrimination cases in federal and state court.

Lauren has tried multiple cases to verdict, including the pro bono representation of a defendant charged with first degree murder. Lauren has also represented dozens of children and caregivers in D.C. Superior Court at trial and in evidentiary hearings during a six-month full-time rotation at the Children’s Law Center, DC’s largest non-profit legal services provider.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.

Zoe Kaiser

Zoe Kaiser is an associate in the firm’s San Francisco office, where she is a member of the Litigation and Investigations and Copyright and Trademark Litigation Practice Groups. She advises on cutting-edge topics such as generative artificial intelligence.

Zoe maintains an active pro bono practice, focusing on media freedom.

Madeleine Dolan

Madeleine (Maddie) Dolan is an associate in the Washington, DC office. Her practice focuses on product liability and mass torts litigation, commercial litigation, and pro bono criminal defense. Maddie’s primary experience is in discovery and trial work.

Prior to joining Covington, Maddie served as a law clerk to U.S. District Judge Mark R. Hornak of the Western District of Pennsylvania in Pittsburgh. She also previously worked as a consultant and strategic communications director and managed communications and marketing campaigns for federal government agencies.