This quarterly update highlights key legislative, regulatory, and litigation developments in the fourth quarter of 2023 and early January 2024 related to technology issues. These included developments related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), data privacy, and cybersecurity. As noted below, some of these developments provide companies with the opportunity for participation and comment.
I. Artificial Intelligence
Federal Executive Developments on AI
The Executive Branch and U.S. federal agencies had an active quarter, which included the White House’s October 2023 release of the Executive Order (“EO”) on Safe, Secure, and Trustworthy Artificial Intelligence. The EO directs a host of new actions for federal agencies designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect vulnerable groups such as consumers, patients, and students; support workers; promote innovation and competition; advance American leadership abroad; and ensure responsible and effective government use of AI. The EO builds on the White House’s prior work on responsible AI development. Concerning privacy, the EO sets forth a number of requirements for the use of personal data in AI systems, including the prioritization of federal support for privacy-preserving techniques and the strengthening of privacy-preserving research and technologies (e.g., cryptographic tools). Regarding equity and civil rights, the EO calls for clear guidance to landlords, Federal benefits programs, and Federal contractors to keep AI systems from being used to exacerbate discrimination. The EO also sets out requirements for developers of AI systems, including requiring companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model and to share the results of all red-team safety tests with the government.
Federal Legislative Activity on AI
Congress continued to evaluate AI legislation and proposed a number of AI bills, though none of these bills is expected to progress in the immediate future. For example, members of Congress continued to hold meetings on AI and introduced bills related to deepfakes, AI research, and transparency for foundation models.
- Deepfakes and Inauthentic Content: In October 2023, a group of bipartisan senators released a discussion draft of the NO FAKES Act, which would prohibit persons or companies from producing an unauthorized digital replica of an individual in a performance or hosting unauthorized digital replicas if the platform has knowledge that the replica was not authorized by the individual depicted.
- Research: In November 2023, Senator Thune (R-SD), along with five bipartisan co-sponsors, introduced the Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312), which would require covered internet platforms that operate generative AI systems to provide their users with clear and conspicuous notice that the covered internet platform uses generative AI.
- Transparency for Foundation Models: In December 2023, Representative Beyer (D-VA-8) introduced the AI Foundation Model Transparency Act (H.R. 6881), which would direct the Federal Trade Commission (“FTC”) to establish transparency standards for foundation model deployers in consultation with other agencies. The standards would require companies to provide consumers and the FTC with information on a model’s training data and mechanisms, as well as information regarding whether user data is collected during inference.
- Bipartisan Senate Forums: Senator Schumer’s (D-NY) AI Insight Forums, which are a part of his SAFE Innovation Framework, continued to take place this quarter. As part of these forums, bipartisan groups of senators met multiple times to learn more about key issues in AI policy, including privacy and liability, long-term risks of AI, and national security.
Federal Regulatory Updates
- Federal Communications Commission (“FCC”): The FCC adopted a Notice of Inquiry (“NOI”) to better understand how AI impacts illegal and unwanted robocalls and texts. The NOI sought to understand AI’s benefits and risks so that the FCC can better combat harms, harness AI’s benefits, and protect consumers. In addition, the FCC announced that it will re-establish the Communications Security, Reliability, and Interoperability Council (“CSRIC”), which will focus on how AI and machine learning can enhance the security, reliability, and integrity of communications networks. This will be the FCC’s ninth charter of CSRIC, with an expected first meeting in June 2024.
- FTC: The FTC announced an exploratory challenge to better understand the harms associated with AI-enabled voice cloning, a technology that has raised concerns about the ways it could be used to harm consumers. The FTC also hosted a virtual roundtable on the Creative Economy and Generative AI, during which speakers emphasized their view that the FTC must treat generative AI like any previous technological development that could harm consumers and competition.
- National Institute of Standards & Technology (“NIST”): NIST released a Request for Information (“RFI”) seeking information to assist in carrying out several of its responsibilities under the EO, including a request for public input on guidelines for AI safety and security, AI content, and responsible global standards for AI development. Public comments are due by February 2nd.
- U.S. Copyright Office (“USCO”): The USCO received more than 10,300 initial and reply comments in response to its NOI on AI and Copyright, which sought input on a range of legal and technical topics, including training, transparency and recordkeeping, copyrightability, infringement, fair use, and labeling. Additionally, the USCO issued a Notice of Proposed Rulemaking (“NPRM”) that set forth proposals for renewed and new exemptions to the Digital Millennium Copyright Act’s (“DMCA”) anti-circumvention provisions. One proposed exemption would permit circumvention of technological measures that control access to “copyrighted generative AI models solely for the purpose of researching biases within the models,” including the sharing of research, techniques, and methodologies that expose and address such biases.
- Cybersecurity and Infrastructure Security Agency (“CISA”): CISA announced that it had jointly released Guidelines for Secure AI System Development with the United Kingdom’s National Cyber Security Centre. The Guidelines are aimed at providers of AI systems and focus on four main areas: (1) secure design; (2) secure development; (3) secure deployment; and (4) secure operation and maintenance. The Guidelines aim to “help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”
AI Litigation Activities
Plaintiffs have brought and tested various theories in lawsuits against companies developing AI models and tools, including copyright infringement, violations of the DMCA, negligence, privacy harms, unjust enrichment, breach of contract, trademark infringement, right of publicity violations, and defamation, among others. A number of high-profile lawsuits have focused on copyright infringement, generally alleging that: (a) the defendants developed or used generative AI models, including large language models (“LLMs”), that were trained on copyrighted works without the copyright owners’ consent; and (b) the model and/or its outputs infringe. Q4 litigation developments include, for example:
- Copyright Dismissals: On November 20th, the district court in Kadrey v. Meta Platforms Inc., 3:23-cv-03417 (N.D. Cal.), dismissed without prejudice most of the claims brought by plaintiff Sarah Silverman (who filed a separate case against Microsoft and OpenAI) and other authors alleging copyright infringement based on the use of their works to develop and deploy LLMs. The court found that the plaintiffs had not sufficiently alleged that the LLMs themselves were directly infringing derivative works, nor had they alleged sufficient similarity between the contents of any LLM output and their copyrighted works. Additionally, on October 30th, the district court in Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal.), dismissed without prejudice all of the plaintiffs’ claims except for direct copyright infringement of training materials.
- Amended Complaints: The Kadrey plaintiffs filed an amended complaint on December 12th, which narrowed their claims to alleged copying of the plaintiffs’ books for use as training material with supplemental supporting allegations. The Andersen plaintiffs filed an amended complaint on November 29th, which included additional examples of AI-generated outputs and also included a trade dress infringement theory based on plaintiffs’ alleged artistic styles.
- New Complaints by Authors & Publishers: On December 27th, the complaint in New York Times v. Microsoft, 1:23-cv-11195 (S.D.N.Y.) was filed, alleging that the defendants unlawfully used millions of New York Times articles to train LLMs. The complaint includes several examples of ChatGPT and Bing Chat allegedly generating “near-verbatim copies of significant portions” of copyrighted articles, and asserts that using such materials to train LLMs does not serve a transformative purpose. Microsoft and OpenAI face other similar lawsuits, including (1) Sancton v. Open AI, Inc., 1:23-cv-10211 (S.D.N.Y.); (2) Tremblay v. OpenAI, Inc., 3:23-cv-3223 (N.D. Cal.); and (3) Silverman v. OpenAI, Inc., 3:23-cv-3416 (N.D. Cal.). The plaintiffs in these cases are various authors who allege, among other things, that the defendants reproduced their copyrighted works to train LLMs without authorization. Similarly, on October 17th, a group of authors including former Arkansas Governor Mike Huckabee brought suit against Meta, Microsoft, and Bloomberg in Huckabee v. Meta Platforms, Inc., No. 1:23-cv-09152 (S.D.N.Y.).
- October 2023 Complaint by Music Publishers: A group of eight music publishers filed suit on October 18th in Concord Music Group, Inc. v. Anthropic PBC, No. 3:23-cv-01092 (M.D. Tenn.), alleging that their copyrighted lyrics were directly and vicariously infringed by Anthropic’s Claude AI tool. Anthropic has since moved to dismiss or transfer the suit for lack of personal jurisdiction and venue.
- December 2023 Antitrust Complaint: The complaint in Helena World Chronicle, LLC v. Google et al., 1:23-cv-03677 (D.D.C.), filed on December 11th, asserts claims under federal antitrust law. The plaintiff alleges that Google abused its monopoly power in the search advertising market by, in part, scraping material to create a generative AI program, launching the Bard chatbot without sufficient development in order to undermine competition, and introducing “search generative experiences,” which respond to user searches by directing users to a summary of other websites rather than to the websites themselves.
II. Connected & Automated Vehicles
- The White House EO: The White House’s EO on Safe, Secure, and Trustworthy Artificial Intelligence, referenced above, included a number of CAV-related provisions. The EO directed the Secretary of Transportation to, within 30 days, direct the Nontraditional and Emerging Transportation Technology Council to assess the need for information and guidance regarding the use of AI in transportation, including by supporting existing and future initiatives to pilot transportation-related applications of AI. Under the EO, the Secretary of Transportation also must direct appropriate Federal Advisory Committees of the Department of Transportation (“DOT”) to provide advice on the safe and responsible use of AI in transportation by the end of January 2024. Finally, within 180 days of the EO, the Secretary of Transportation must direct the Advanced Research Projects Agency-Infrastructure to explore the transportation-related opportunities and challenges of AI, including software-defined AI enhancements impacting autonomous mobility ecosystems.
- NHTSA Notice and Request for Comment on Driving Automation Systems: The National Highway Traffic Safety Administration (“NHTSA”) took steps to increase its understanding of potential safety issues implicated by driving automation systems (“DAS”), issuing a notice and request for comments on a request for approval of a new information collection regarding human interaction with DAS on December 12th. NHTSA proposed to perform research involving the collection of information from the public as part of a multi-year effort to learn how humans interact with DAS, which will “support NHTSA in understanding the potential safety challenges associated with human-DAS interactions, particularly in the context of mixed traffic interactions where some vehicles have DAS and others do not” and where vehicles are equipped with DAS at varying levels of automation. The proposed project will examine driving performance measures (such as takeover time and reaction time) and use questionnaires to measure understanding of and trust in the automation, as well as risk taking.
- FCC Letters to Carmakers Regarding Connectivity and Domestic Violence: In early January 2024, the FCC took steps to increase its understanding of certain safety issues implicated by connected vehicles by sending letters to several automotive manufacturers regarding the potential for wireless connectivity and location data to negatively impact partners in abusive relationships. To help the FCC understand how it can better fulfill its duties under the Safe Connections Act – which provides the FCC with authority to assist survivors of domestic violence and abuse with secure access to communications – the FCC requested that letter recipients respond to a series of questions about current and planned connectivity options, policies in place to remove access to connected apps at the request of domestic violence survivors, and how the company retains, shares, and/or sells a driver’s geolocation data collected by connected apps.
- Funding Opportunities: The federal government announced two funding opportunities this past quarter. On November 15th, the Federal Transit Administration announced the opportunity to apply for $4.7M in FY23 funding under the Innovative Coordinated Access and Mobility pilot program. This funding opportunity “seeks to improve coordination to enhance access and mobility to vital community services for older adults, people with disabilities, and people of low income.” The Notice of Funding Opportunity provides that if an applicant is “proposing to implement autonomous vehicles or other innovative motor vehicle technology, the application should demonstrate that all vehicles will comply with applicable safety requirements,” including those administered by NHTSA and the Federal Motor Carrier Safety Administration (e.g., the Federal Motor Vehicle Safety Standards and Federal Motor Carrier Safety Regulations). Applicants must submit completed proposals by February 13th. Additionally, on December 13th, the DOT announced a $25M funding opportunity for its Rural Autonomous Vehicle research program. Accredited universities may apply for the six-year cooperative agreement program. Recipients will use program funding to conduct research on the benefits and responsible application of AVs and associated mobility technologies in rural and Tribal communities.
- Stakeholder Advocacy: On the stakeholder front, on December 7th, eighteen organizations sent a letter to DOT Secretary Pete Buttigieg stating that the CAV industry is “at a critical juncture and in need of strong leadership from USDOT” and urging the Department to “use existing authorities to assert its jurisdiction over the design, construction, and performance of motor vehicles, including those deploying emerging technology.” The letter specifically encouraged DOT to move forward with a Notice of Proposed Rulemaking on the ADS-equipped Vehicle Safety, Transparency, and Evaluation Program (“AV STEP”), a program announced in July under which NHTSA would consider applications for deploying noncompliant vehicles equipped with automated driving systems (“ADS”), subject to review processes, terms, and conditions, to collect data and enhance research into AV safety and performance. DOT has yet to respond to the letter or issue a Notice of Proposed Rulemaking.
- Updated FHWA Manual: Finally, on December 18th, the Federal Highway Administration (“FHWA”) published the 11th Edition (2023) of the Manual on Uniform Traffic Control Devices. The Manual includes considerations for agencies to prepare roadways for automated vehicle technologies and to support the safe deployment of automated driving systems.
III. Data Privacy & Cybersecurity
With respect to privacy, California and Colorado state regulators advanced a number of measures to better define the scope of each state’s privacy laws, including rules for opt-out preference signals and new draft regulations, and the FTC brought a number of enforcement actions.
- New Rules for Opt-out Signals: At its December 8th board meeting, the California Privacy Protection Agency (“CPPA”) included a legislative proposal that would require vendors of web browsers to include a feature that would allow consumers to exercise data subject rights through opt-out preference signals. The Colorado Attorney General also announced that the Global Privacy Control (“GPC”) will become the first universal opt-out mechanism the Attorney General considers valid under the Colorado Privacy Act.
- Additional CCPA Regulations: The CPPA also proposed draft rules on additional topics, including opt-out and access rights for automated decisionmaking technology, privacy risk assessments, and cybersecurity audits. As a next step, the CPPA will initiate formal rulemaking, at which point the public can comment on the proposed rules.
- Key FTC Enforcement Actions: The FTC continued to bring enforcement actions related to companies’ privacy practices. For example, on December 19th, the FTC announced that it reached a settlement with Rite Aid Corporation and Rite Aid Headquarters Corporation to resolve allegations that the companies violated Section 5 of the FTC Act. The FTC alleged that the companies used facial recognition in stores without taking reasonable measures to prevent harm to consumers, including by failing to test the accuracy of the facial recognition technology and failing to oversee and train employees.
Cybersecurity regulation and enforcement continued to be a priority for both federal and state regulators, including with respect to infrastructure and finance.
- Infrastructure: On October 16th, CISA released updated guidance on Security-by-Design and Security-by-Default principles for technology and software manufacturers, which it originally published in April 2023. The latest guidance, published in coordination with the U.S. Federal Bureau of Investigation, U.S. National Security Agency, and thirteen international partners, provides additional recommendations for software manufacturers (including manufacturers of artificial intelligence software systems and models) to improve the security of their products.
- Finance: On October 27th, the FTC amended its Safeguards Rule to require non-banking financial institutions to report data security breaches. The amendment requires non-bank financial institutions to report when they discover that information affecting 500 or more people has been acquired without authorization. Additionally, on November 1st, the New York Department of Financial Services (“NYDFS”) announced that it had finalized an amendment to its “first-in-the-nation” cybersecurity regulation. The Amendment implemented many of the changes that NYDFS originally proposed in prior versions of the regulations. These include: (1) removing the previously proposed requirement that each class A company conduct independent audits of its cybersecurity program “at least annually” (the regulation still requires each class A company to conduct such audits based on its risk assessments); (2) requiring confirmation that a covered entity’s management has allocated sufficient resources to implement and maintain a cybersecurity program; and (3) removing a proposed additional requirement to report certain privileged account compromises to NYDFS, while retaining requirements for covered entities to report certain ransomware deployments or extortion payments.