On February 10, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York ruled from the bench that the attorney-client privilege and the work product doctrine did not protect legal strategy materials that a criminal defendant generated using a generative AI tool, where the defendant used a public version of the tool and was not instructed by his attorney to generate the materials.  On February 17, 2026, the court issued a written memorandum explaining its reasoning.  

The question presented – an issue of first impression – was: “whether when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the communications protected by attorney-client privilege or the work product doctrine?”  The court’s answer was no, given the unique circumstances of the case – namely, that no lawyer was involved in the back-and-forth with the AI tool, and the tool itself was a public (i.e., non-confidential) version. 

Below, we summarize the background of the case, the decision, and key takeaways on AI and legal privilege.

Background.  On October 28, 2025, Heppner, the defendant, was indicted and charged with several federal crimes, including securities fraud, arising from his time as an executive at several corporations.  When Heppner was arrested, the FBI executed a search warrant and seized a number of documents and electronic devices, which included communications that Heppner had with a publicly available version of a generative AI platform that “prepared reports that outlined defense strategy, [and] outlined what he might argue with respect to the facts and the law” of what the government may charge.  Heppner had engaged in these communications with the AI tool after he had received a grand jury subpoena and after it was clear from discussions with the government that he was the target of the investigation.  Heppner had not been instructed by his counsel to engage with the tool.  Heppner asserted privilege and attorney work product over the materials, arguing that he had (1) input into the platform information that he had learned from counsel; (2) created the AI communications for the purpose of speaking with counsel to obtain legal advice; and (3) subsequently shared the AI communications with counsel for the purpose of obtaining legal advice.

Attorney-Client Privilege.  The court first ruled that the AI-generated materials failed to satisfy the elements of the attorney-client privilege.  Attorney-client privilege attaches to (1) communications between a client and his or her attorney (2) that are intended to be, and were, kept confidential (3) for the purpose of obtaining or providing legal advice.  The court held that at least two, if not all three, elements were not met. 

  1. Not Communications with Counsel – The court held that the AI communications were not between Heppner and his counsel because the AI tool was not an attorney, and Heppner was consulting the tool entirely on his own (i.e., not at the direction of counsel).
  2. Not Confidential – The AI communications were not confidential because Heppner communicated with a public version of a third-party AI platform, and its privacy policy did not support a reasonable expectation of confidentiality.  The court noted that the platform’s privacy policy specified that the platform collects data on user inputs and outputs, which it uses to train the tool, and reserved the right to disclose such data to third parties, including governmental regulatory authorities.
  3. Not for Legal Advice – Heppner did not communicate with the AI tool for the purpose of obtaining legal advice because he did not do so at the direction of counsel; rather, he did so on his own initiative.  The court added that even if the information input into the AI platform was privileged, any such privilege was waived by sharing the information with the platform.

Work Product Doctrine.  The court also held that the work product doctrine did not apply to the materials generated from the public, non-proprietary AI tool.  The court explained that the work product doctrine “provides qualified protection for materials prepared by or at the behest of counsel in anticipation of litigation or for trial.”  Even assuming that the AI-generated materials were prepared in anticipation of litigation, the court concluded that they were not prepared by or at the behest of an attorney and did not reflect defense counsel’s strategy.  As such, the court held that the work product doctrine did not apply.

Because neither the attorney-client privilege nor the work product doctrine applied, the court held that the materials should be disclosed to the prosecution.

Privilege Takeaways.  The court’s decision is one of what will likely be many decisions analyzing the intersection of AI and legal privileges, but its holdings are confined to its facts, which are quite specific.  Of particular note, Heppner was using a publicly available (non-confidential) version of an AI tool and was not operating at the direction of counsel.

As courts continue to address these questions, there are concrete steps that in-house counsel and companies can take to minimize the risks posed by AI in the context of privilege, litigation, and government investigations:

  • Define Acceptable Use of AI – Implement a policy that defines the acceptable use of AI, including uses that may implicate legal considerations, such as the risk of creating discoverable documents.  For example, consider limiting use to approved, proprietary enterprise AI tools, implementing limits on the retention of AI prompts and outputs, and providing specific guidance for legally sensitive uses. 
  • User Training & Awareness – Educate users (including attorneys) on the appropriate use of AI tools and the risks they pose, including the potential disclosure of privileged communications and work product.  For example, it may be helpful to convey that if a non-lawyer independently consults a public version of an AI tool on legal matters, those communications are not likely to be privileged. 
  • Beware of “Third-Party” Risk – Keep in mind, and train others, that the use of publicly available AI tools may be viewed as a disclosure to third parties, which may undermine, weaken, or waive claims of privilege.  To mitigate the risk of disclosure, consider the availability of proprietary AI tools and train users on the benefits and risks of different kinds of tools.
Micaela McMurrough


Micaela McMurrough serves as co-chair of Covington’s global and multi-disciplinary Technology Group, as co-chair of the Artificial Intelligence and Internet of Things (IoT) initiative. In her practice, she has represented clients in high-stakes antitrust, patent, trade secrets, contract, and securities litigation, and other complex commercial litigation matters, and she regularly represents and advises domestic and international clients on cybersecurity and data privacy issues, including cybersecurity investigations and cyber incident response. Micaela has advised clients on data breaches and other network intrusions, conducted cybersecurity investigations, and advised clients regarding evolving cybersecurity regulations and cybersecurity norms in the context of international law.

In 2016, Micaela was selected as one of thirteen Madison Policy Forum Military-Business Cybersecurity Fellows. She regularly engages with government, military, and business leaders in the cybersecurity industry in an effort to develop national strategies for complex cyber issues and policy challenges. Micaela previously served as a United States Presidential Leadership Scholar, principally responsible for launching a program to familiarize federal judges with various aspects of the U.S. national security structure and national intelligence community.

Prior to her legal career, Micaela served in the Military Intelligence Branch of the United States Army. She served as Intelligence Officer of a 1,200-member maneuver unit conducting combat operations in Afghanistan and was awarded the Bronze Star.

Matthew Harden


Matthew Harden is a cybersecurity and litigation associate in the firm’s New York office. He advises on a broad range of cybersecurity, data privacy, and national security matters, including cybersecurity incident response, cybersecurity and privacy compliance obligations, internal investigations, and regulatory inquiries. He works with clients across industries, including in the technology, financial services, defense, entertainment and media, life sciences, and healthcare industries.

As part of his cybersecurity practice, Matthew provides strategic advice on cybersecurity and data privacy issues, including cybersecurity investigations, cybersecurity incident response, artificial intelligence, and Internet of Things (IoT). He also assists clients with drafting, designing, and assessing enterprise cybersecurity and information security policies, procedures, and plans.

As part of his litigation and investigations practice, Matthew leverages his cybersecurity experience to advise clients on high-stakes litigation matters and investigations. He also maintains an active pro bono practice focused on veterans’ rights.

Matthew currently serves as a Judge Advocate in the U.S. Coast Guard Reserve.

Bryan Ramirez


Bryan Ramirez is an associate in the firm’s San Francisco office and is a member of the Data Privacy and Cybersecurity Practice Group. He advises clients on a range of regulatory and compliance issues, including compliance with state privacy laws. Bryan also maintains an active pro bono practice.