This article is for informational purposes only and does not constitute legal advice or a legal opinion. Organizations should consult their counsel and internal legal teams regarding how these issues apply to their specific circumstances.
It’s a question of first impression: when someone communicates with a publicly available AI platform about a pending criminal investigation, are those communications protected by the attorney-client privilege or the work product doctrine?
According to a Manhattan federal judge, the answer is an unequivocal “no.” And his February 17, 2026 memorandum is the first written federal court opinion directly addressing AI-generated materials and legal privilege.
This ruling creates an immediate compliance problem for corporate legal departments and government agencies.
What Happened in United States v. Heppner
Bradley Heppner, former CEO of Beneficient Company Group, faces federal fraud charges in an alleged $150 million scheme involving false representations to investors. After receiving a grand jury subpoena and retaining defense counsel at Quinn Emanuel, but before his arrest in November 2025, Heppner used the consumer version of Anthropic’s Claude AI platform to generate 31 documents analyzing his legal exposure, potential defense theories, and strategic options.
Heppner shared those 31 AI-generated documents with his lawyers. On his privilege log, they were described as “artificial intelligence-generated analysis conveying facts to counsel for purpose of obtaining legal advice.” When the FBI executed a search warrant at Heppner’s home and seized his electronic devices, the government moved for a ruling that the AI documents were neither privileged nor protected work product.
U.S. District Judge Jed Rakoff agreed. On February 10, Judge Rakoff ruled in United States v. Heppner, No. 25-cr-00503 (S.D.N.Y.), that the 31 documents a criminal defendant created using a consumer AI tool and then sent to his lawyers were not protected by attorney-client privilege or the work product doctrine. One week later, the first-of-its-kind written federal court opinion about AI-generated materials and legal privilege followed.
The Court’s Reasoning: Three Failures
Judge Rakoff’s written opinion systematically undermined the privilege claim on three independent grounds.
- No attorney-client communication existed: The court held that Claude is not an attorney and cannot form an attorney-client relationship. The opinion rejected the argument that AI inputs are more like using word processing software, noting that all recognized privileges require a “trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline.”
- The communications were not confidential: The court pointed to the AI platform’s privacy policy, which states that the company collects user inputs and AI outputs, uses that data for training, and reserves the right to disclose it to third parties — including “governmental regulatory authorities” — even absent a subpoena. Under these terms, Heppner had no reasonable expectation of confidentiality. The court distinguished this from cases protecting confidential client notes prepared for counsel, because Heppner initially shared those notes with a third party.
- The purpose was not to obtain legal advice: The court acknowledged this was a “closer call,” since Heppner’s counsel asserted the documents were created for the “express purpose of talking to counsel.” However, defense counsel also conceded that Heppner acted on his own initiative; his counsel did not direct him to use the AI tool. The court noted that the AI platform itself disclaims providing legal advice, and that the analysis might have been different had counsel directed Heppner to use the tool. In that situation, the AI could arguably have functioned as a lawyer’s agent.
Work Product Doctrine Was Also Rejected
The opinion separately addressed work product protection, which shields materials prepared by or at the direction of counsel in anticipation of litigation. The court rejected this claim because:
- The AI documents were created by Heppner on his own initiative, not at counsel’s direction
- The documents did not reflect defense counsel’s strategy (they “affected” strategy going forward but didn’t “reflect” it at the time of creation)
- Neither Heppner nor the AI tool qualifies as counsel or counsel’s agent
The court explicitly disagreed with an earlier SDNY magistrate ruling (Shih v. Petal Card) that had suggested work product protection could apply to materials prepared by a non-attorney without attorney direction, reinforcing that the purpose of the doctrine is to protect lawyers’ mental processes.
The Most Alarming Aspect of the Ruling: The Waiver Problem
For organizations, the waiver analysis may be the most consequential part of the opinion. Heppner entered information he had received from his attorneys, information that was presumably privileged, into the consumer AI tool.
The court held that by sharing that privileged information with the AI platform (and by extension, with the platform’s operator under its privacy policy), Heppner waived the privilege over that information, just as if he had shared it with any other third party.
This is a critical organizational risk: feeding privileged information into consumer AI tools can destroy attorney-client protection, even if the outputs are later shared with counsel.
Attorney Direction Is the Most Important Procedural Takeaway
Judge Rakoff stated that had Heppner been acting on instructions from his counsel, “Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.”
The distinction between using AI on one’s own initiative and using it at the direction of counsel is now the dividing line. Heppner’s lawyers conceded that they did not direct him to run the AI queries, which undermined both the privilege and work product claims.
The distinction creates the single most actionable takeaway: If employees or clients need to use AI tools to prepare facts, narratives, or summaries for legal purposes, that work must be initiated, directed, and documented by counsel.
But attorney direction alone doesn’t solve the entire problem. It addresses the “purpose” and “at the direction of counsel” elements. It does not, by itself, fix the confidentiality gap, which depends on the AI tool being used.
Not All AI Tools Are Created Equal: The Confidentiality Spectrum
Does any of this apply if you use the enterprise version? The answer requires understanding that AI tools exist on a spectrum in terms of confidentiality. How the Heppner ruling applies depends on where your tool sits.
- Tier 1: Consumer AI tools. This is the type Heppner used. Standard terms allow data collection, model training, and disclosure to third parties, including government authorities. The court found no reasonable expectation of confidentiality. Using consumer AI for legal matters is now indefensible, and privilege arguments are unavailable.
- Tier 2: Enterprise AI with negotiated agreements. Many organizations use enterprise licenses for platforms like ChatGPT Enterprise, Claude for Enterprise, or Microsoft Copilot with negotiated terms that include contractual confidentiality obligations, no training on customer data, and no right to disclose inputs to third parties. These features directly address the confidentiality gaps identified in Heppner, but privilege protection under them remains untested in court. Any carve-outs for anonymized usage data or analytics could still create waiver risk, so organizations must review the actual terms rather than assume enterprise licenses are automatically confidential. And even with enterprise terms, an aggressive prosecutor could argue that human review by the AI provider (for safety or troubleshooting) constitutes third-party disclosure, making a Tier 3 environment the only true gold standard.
- Tier 3: AI integrated into legal workflow platforms. When AI operates within eDiscovery, legal hold, or case management systems on data already under your control, behind security perimeters, and under existing agreements, third-party disclosure is not an issue. Access controls, audit trails, and confidentiality protections apply as they would for any other document, effectively eliminating Heppner-type risks.
Best scenario: attorney-directed use with Tier 2 or Tier 3 tools.
Worst scenario: an employee independently uses a consumer (Tier 1) tool, as in Heppner.
Government Agencies Face Compounding Risks
Federal, state, and local government agencies face a version of this problem that’s broader and more complicated than what corporate legal departments face.
FOIA exposure. Beyond privilege issues, government agencies have ongoing disclosure obligations under the Freedom of Information Act and state sunshine laws. The question is not only whether a document is privileged, it’s whether it qualifies as an agency record subject to production upon request. When a government employee uses AI to prepare materials related to agency business, including materials intended for consultation with agency counsel, those AI-generated documents may constitute agency records subject to FOIA. The deliberative process privilege, which agencies typically rely on to protect pre-decisional documents, may not apply to materials generated through a third-party AI platform where confidentiality was never established.
Security authorization. Consumer AI tools do not have the security authorizations required for government data handling, including FedRAMP®, IL4/IL5, or CMMC. Even enterprise versions of major AI platforms may lack the specific authorizations required for an agency’s data classification level. Many agencies maintain authorized tool lists, and using any AI tool that is not on that list, whether enterprise or not, could violate security policies regardless of privilege considerations. For federal defense and intelligence agencies, this creates a dual compliance risk: privilege exposure and a potential security incident.
Workforce reality. Government agencies facing headcount reductions and hiring freezes are under increasing pressure to use AI to maintain operational capacity. This pressure increases the likelihood that employees will turn to consumer or unauthorized AI tools to complete their work, creating exactly the kind of unsanctioned AI use that Heppner has made discoverable. Agencies need authorized, secure AI tools that are both available and usable, or employees will find alternatives on their own.
What Organizations Should Do Now
- Require attorney direction for all AI-assisted legal work. Any AI work related to legal matters should be initiated, directed, and documented by counsel. Include this requirement in engagement letters, legal hold notices, and employee policies.
- Ban consumer AI tools for legal matters. Free or paid consumer AI accounts should not be used for anything involving legal work. Make this a non-negotiable rule instead of a guideline.
- Audit your enterprise AI agreements. Don’t assume enterprise licenses automatically provide confidentiality. Have your legal department review the data processing terms carefully for any carve-outs related to usage data, analytics, model training, or third-party disclosure, and renegotiate or switch tools if necessary.
- Prioritize AI tools within your legal workflow platform. AI that operates on data already under your control and within your secure environment avoids third-party disclosure risks. This approach is more defensible than relying solely on contractual protections that have not been tested in court.
- Update eDiscovery and FOIA protocols. AI-generated documents are now a recognized category of discoverable material. Make sure your legal hold notices, collection procedures, and privilege review processes account for AI outputs across all platforms, including personal devices.
- Educate immediately. All employees, executives, and agency staff must understand that AI conversations may be discoverable, and that entering privileged information into the wrong AI tool can destroy attorney-client privilege. Don’t wait for the next court ruling to act.
The Bottom Line
Judge Rakoff’s opinion in the Heppner case establishes that “create with AI, send to lawyer” is not a privilege strategy. Consumer AI privacy policies can defeat confidentiality claims, and feeding privileged information into AI tools can waive privilege over the original communications.
Attorney-directed AI use on a platform that maintains confidentiality is different. Organizations that get their tools, agreements, and procedures right can use AI for legal work confidently. Those that don’t are creating discoverability risks they may not yet recognize. This ruling is a wake-up call.
Casepoint’s eDiscovery, legal hold, and FOIA solutions — including GenAI capabilities for document summarization, chat-based search, and document classification — operate entirely within your secure environment on data you control. No third-party data sharing, no privilege exposure, and no new FOIA risk. Whether for corporate legal departments or government agencies, we help organizations use AI for legal work safely and defensibly.
Author
VP of Product Marketing and Revenue Enablement
Amit Dungarani serves as Vice President of Product Marketing and Revenue Enablement at Casepoint, where he leads strategic initiatives to align the company's comprehensive portfolio of enterprise solutions with the complex needs of large corporations and government agencies. With over 23 years of leadership experience spanning enterprise…