On April 3, 2025, the Budapest District Court made a request for a preliminary ruling to the Court of Justice of the European Union (“CJEU”) relating to the application of EU copyright rules to outputs generated by large language model (LLM)-based chatbots, specifically Google’s Gemini (formerly Bard). The case, C-250/25, involves a dispute between Like Company, a Hungarian news publisher, and Google Ireland Ltd.

The parties’ claims are as follows:

  • Like Company claims Google’s chatbot reproduces and makes available its protected press publications without consent, exceeding the exception for “use of individual words or very short extracts” under Article 15(1) of the DSM Copyright Directive, thus harming its economic rights. It argues that the chatbot’s responses amount to both reproduction and communication to the public, each requiring authorization. It also contends that training the LLM involved unauthorized reproduction of its content, beyond what Article 4 of the DSM Copyright Directive permits.
  • Google counters that the chatbot’s outputs do not constitute reproduction or communication to the public under Hungarian or EU law. It argues that the chatbot does not serve a “new public” as required by CJEU case law, and that much of the chatbot’s output is generated or altered content (including hallucinations), not merely copied material. Google asserts that even if reproduction occurred, it falls under the exceptions for temporary reproduction (Article 5(1) of the InfoSoc Directive) and text and data mining (Article 4 of the DSM Copyright Directive). Google also argues that Gemini (Bard) is a creative tool, not a database, and that it respects users’ rights under Article 11 of the EU Charter of Fundamental Rights (freedom of expression and information).

The referral involves several provisions of Directive (EU) 2019/790, known as the DSM Copyright Directive. Article 15(1) grants press publishers certain rights with respect to the online use of their press publications by information society service providers, and Article 4 provides an exception to the rights of reproduction and extraction for the purposes of text and data mining, provided the use has not been expressly reserved by the rights holder. The referral also concerns several provisions of Directive 2001/29/EC, known as the InfoSoc Directive. These include Article 2, which provides authors and certain other rights holders with the exclusive right to authorize or prohibit the reproduction of their works or other protected subject matter, and Article 3(2), which provides certain rights holders with the exclusive right to authorize or prohibit the making available of their protected subject matter to the public.

The Hungarian Court submitted four questions that ask, essentially:

  1. Does the chatbot’s display of text similar to press content—and of a length protected by Article 15 of the DSM Copyright Directive—constitute “communication to the public” under the cited provisions, and if so, does it matter that the chatbot output is the “result of a process in which the chatbot merely predicts the next word on the basis of observed patterns”?
  2. Does training the LLM amount to “reproduction” under Article 2 of the InfoSoc Directive, given the model’s reliance on observed linguistic patterns?
  3. If the answer to question (2) is yes, does the reproduction fall under the text and data mining exception in Article 4 of the DSM Copyright Directive, which allows certain uses without permission unless rights holders have opted out?
  4. When a user prompts the chatbot with text matching or referring to a press publication, and the chatbot’s response contains part or all of that publication, does this constitute reproduction by the service provider?

This is the first case in which the CJEU will address legal questions at the intersection of generative AI and EU copyright law. A judgment is expected by late 2026.

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation or other tech regulatory matters, we would be happy to assist.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the cross-section of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations, and has counseled multi-national companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

  • coordinating responses to investigations into the handling of personal information under the GDPR;
  • counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces;
  • advising a major technology company on the legality of hacking defense tactics; and
  • advising a content company on compliance obligations under the DSA, including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro bono practice representing journalists with various news-gathering needs.