Attorney-Client Privilege and AI: What the Heppner Ruling Means for Legal Professionals
In February 2026, a federal judge ruled that AI-generated documents are not protected by attorney-client privilege. Here's what legal professionals need to know — and how to protect client confidentiality when using AI.
The Ruling That Changed Attorney-Client Privilege Forever
On February 12, 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York issued a ruling in United States v. Heppner that will define the relationship between artificial intelligence and attorney-client privilege for years to come. The court held that documents generated using consumer AI tools — including ChatGPT, Claude, and Gemini — are not protected by attorney-client privilege when the attorney disclosed confidential client information to the AI provider during the generation process.
Attorney-client privilege AI risks are no longer theoretical. They are precedent.
The ruling is the first federal decision to directly address whether AI-assisted legal work product retains privilege protection when client data is transmitted to a third-party AI provider. Its implications extend far beyond the facts of the case.
The Facts of United States v. Heppner
The case arose from a white-collar criminal prosecution involving corporate fraud. Defense counsel for Marcus Heppner, a former financial services executive, used ChatGPT over four months to assist with case preparation — entering client interview notes, internal corporate communications, financial records, and draft defense strategy memoranda into OpenAI's consumer product.
During discovery, prosecutors sought production of the AI-generated documents. Defense counsel objected, asserting attorney-client privilege and work-product protection. The government argued that voluntarily transmitting confidential client information to OpenAI — a third party with no obligation of confidentiality — waived privilege. The court agreed.
Judge Rakoff's Reasoning
Judge Rakoff's 34-page opinion walked through the foundational elements of attorney-client privilege and applied them methodically to AI-assisted legal work.
The third-party disclosure doctrine. The court began with the well-established principle that attorney-client privilege is waived when the client — or the attorney acting on the client's behalf — voluntarily discloses confidential information to a third party. OpenAI, as the operator of ChatGPT, is unambiguously a third party. The court rejected the argument that OpenAI functions as a mere "tool" analogous to a word processor, finding that data transmitted to ChatGPT is received and processed by OpenAI and stored on its servers.
No reasonable expectation of confidentiality. The court examined OpenAI's terms of service and privacy policy — noting data retention of up to 30 days, potential use for model improvement, and human review of flagged conversations — and concluded that an attorney transmitting client data under these terms cannot claim a reasonable expectation of confidentiality.
The scope of waiver. Perhaps most significantly, the court applied the subject-matter waiver doctrine, holding that privilege loss extends beyond the specific documents entered into the AI to cover the entire subject matter of those communications — including related documents that were never entered into the AI system.
Work-product protection. The court applied a parallel analysis to work-product claims, finding that voluntary disclosure of attorney mental impressions and legal strategies to a third-party AI provider substantially undermined the claim. The court ordered production of the AI-generated documents.
Why This Ruling Matters Beyond the Heppner Case
The Heppner ruling is a trial court decision, not binding appellate precedent. But its influence is already substantial, for several reasons.
First, Judge Rakoff is one of the most respected and frequently cited federal trial judges in the country, and his opinions carry outsized persuasive authority in commercial and white-collar matters.
Second, the reasoning is doctrinally straightforward — the court applied existing privilege doctrine to new facts, making the holding difficult to distinguish and likely to be followed by other courts.
Third, the ruling arrived when the profession was already grappling with AI governance. According to a 2025 ABA survey, 35% of attorneys have used generative AI in their practice, but only 12% report having a formal AI usage policy. The Heppner ruling transformed an abstract compliance concern into an immediate, concrete risk.
The statistics are stark: a 2025 McKinsey survey found that 68% of knowledge workers in regulated industries use generative AI tools at work, and 44% admit to entering client, patient, or customer data into those tools. For attorneys, each instance of entering privileged information into a consumer AI tool is now a potential privilege waiver — with consequences that extend to the entire subject matter of the communication.
The Broader Regulatory and Ethics Landscape
The Heppner ruling did not emerge in a vacuum. It is part of a rapidly developing framework of AI ethics guidance for the legal profession.
ABA Formal Opinion 512 (2024)
The American Bar Association issued Formal Opinion 512 in late 2024, confirming that attorneys' duties of competence (Model Rule 1.1) and confidentiality (Model Rule 1.6) apply fully to the use of AI tools. The opinion stated that attorneys must understand how AI tools process and store data, must make "reasonable efforts" to prevent unauthorized disclosure when using AI, and must stay current on technological developments that affect client data security. While Formal Opinion 512 stopped short of prohibiting consumer AI use, it laid the groundwork for the Heppner court's analysis by establishing that attorneys bear responsibility for understanding the data practices of the tools they use.
State Bar Ethics Opinions
Since the Heppner ruling, at least 14 state bar associations have issued emergency ethics guidance on AI use. Notable developments include:
- New York State Bar Association issued an emergency ethics opinion requiring attorneys to conduct a data privacy assessment before using any AI tool with client data
- California State Bar updated its formal ethics opinion to state that using consumer AI tools with unredacted client data may violate Rule 1.6 absent informed client consent
- Florida Bar issued guidance requiring disclosure of AI tool usage in engagement letters for all new matters
- Texas State Bar published an ethics advisory warning that AI-assisted privilege waivers may trigger malpractice liability
What Other Courts May Do
Legal scholars debate whether appellate courts will adopt the full Heppner framework — some argue the subject-matter waiver holding is too broad, while others contend the third-party disclosure analysis is so firmly rooted in existing doctrine that departure is unlikely. But the direction of travel is not in dispute: courts, regulators, and bar associations are converging on the principle that transmitting unprotected client data to a third-party AI provider creates serious privilege risks that attorneys must address proactively.
What Legal Professionals Should Do Now
The Heppner ruling demands an immediate and structured response from every law firm and legal department that uses AI tools. The following steps move from urgent triage to long-term infrastructure.
1. Conduct an Immediate AI Usage Audit
Survey every attorney, paralegal, and staff member in your organization. Determine which AI tools are being used, what client data has been entered, and for which matters. Include personal accounts and unauthorized tools. The results will determine the scope of your privilege exposure.
2. Assess Privilege Exposure for Active Matters
For any matter where client data was entered into a consumer AI tool, conduct a privilege review with ethics counsel. Evaluate whether the Heppner subject-matter waiver analysis could apply and whether proactive disclosure to opposing counsel or the court is warranted. Early voluntary disclosure is generally better than forced production after a motion to compel.
3. Implement Privacy-First AI Architecture
The most effective response is architectural: adopt AI tools that de-identify all personally identifiable information before data leaves your environment. PrivacyFrom.AI automatically replaces names, dates, case numbers, financial figures, and 50+ other entity types with reversible tokens before any data reaches the AI model. The AI processes only anonymized content, and results are re-identified on your local device.
This eliminates the third-party disclosure that triggered the Heppner waiver. If the AI model never receives "Marcus Heppner's internal communications regarding the Q3 revenue adjustments," there is no privileged information to disclose — and no privilege to waive.
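To make the mechanism concrete, here is a minimal sketch of reversible de-identification (pseudonymization). The regex-based entity detection and the token format shown are illustrative assumptions, not PrivacyFrom.AI's actual implementation — production systems use trained NER models to detect entities, not hand-written patterns.

```python
import re

def deidentify(text, patterns):
    """Replace matched entities with reversible tokens; keep a local map."""
    token_map = {}   # token -> original value; never leaves your device
    counters = {}
    def make_replacer(label):
        def _sub(match):
            counters[label] = counters.get(label, 0) + 1
            token = f"[{label}_{counters[label]}]"
            token_map[token] = match.group(0)
            return token
        return _sub
    for label, pattern in patterns.items():
        text = re.sub(pattern, make_replacer(label), text)
    return text, token_map

def reidentify(text, token_map):
    """Re-insert the original values locally after the AI response returns."""
    for token, value in token_map.items():
        text = text.replace(token, value)
    return text

# Hypothetical patterns for illustration only.
patterns = {
    "PERSON": r"Marcus Heppner",
    "CASE_NO": r"\b\d{2}-cv-\d{4}\b",
}
safe, mapping = deidentify("Marcus Heppner, matter 24-cv-1234.", patterns)
# `safe` now contains only tokens; it is the only text sent to the AI provider.
restored = reidentify(safe, mapping)
```

Only the tokenized text crosses the network boundary; the token map stays on the local device, which is what keeps the third-party AI provider from ever receiving identifiable client information.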
4. Update Engagement Letters and Client Disclosures
Post-Heppner, clients must be informed about AI tool usage in their matters. Update engagement letters to disclose which AI tools are used, what safeguards are in place, and how client data is protected. Several state bars now require this, and informed consent provides an additional layer of protection against malpractice claims.
5. Establish a Written AI Governance Policy
Draft or update a firm-wide AI acceptable use policy covering approved tools, prohibited data inputs, de-identification requirements, human review standards, and training schedules. Assign responsibility for compliance and conduct periodic audits. For a detailed framework, see our guide on how law firms can use AI without risking confidentiality.
6. Train Your Entire Team
Training must be specific, recurring, and role-based. Ensure every person who touches client data understands that pasting it into any unapproved AI tool can trigger the same privilege consequences at issue in Heppner. Quarterly updates on new rulings and ethics opinions are the minimum standard.
7. Document Your Safeguards
Maintain detailed records of your AI governance measures — audit results, training logs, policy acknowledgments, and vendor assessments. Under ABA Model Rule 1.6, "reasonable efforts" is the standard, and documentation is how you prove you met it.
The Cost of Inaction
The hidden costs of using ChatGPT with client data are no longer hidden. They are quantified. The average cost of a data breach in the legal sector reached $5.72 million in 2025, according to IBM's Cost of a Data Breach Report. Malpractice premiums are rising for firms without AI governance policies. And the reputational damage from a privilege waiver in a high-profile matter is incalculable.
The Heppner ruling did not create a new risk. It made an existing risk undeniable. Every day that a law firm continues to use consumer AI tools without de-identification safeguards is a day of accumulating privilege exposure across every matter those tools touch.
Frequently Asked Questions
What is the Heppner ruling and why does it matter for attorneys using AI?
The Heppner ruling is the February 2026 decision by Judge Jed Rakoff in United States v. Heppner, holding that documents generated using consumer AI tools like ChatGPT are not protected by attorney-client privilege. The court found that entering confidential client information into a third-party AI system constitutes voluntary disclosure, which waives privilege under established doctrine. Critically, the court applied subject-matter waiver, meaning privilege loss extends beyond the specific documents entered into the AI to cover the entire subject matter of those communications. Any attorney who has used consumer AI tools with client data faces immediate privilege exposure that must be assessed.
Does the Heppner ruling apply to all AI tools, or only consumer products like ChatGPT?
The ruling specifically addressed consumer AI tools where confidential data is transmitted to third-party servers. The court's analysis focused on the act of disclosure and the absence of a reasonable expectation of confidentiality under the provider's terms of service. AI tools that process data on-device, or platforms like PrivacyFrom.AI that de-identify all personally identifiable information before any data reaches the AI model, would not trigger the same concerns. Enterprise AI tiers with stricter agreements may present a different analysis, though no court has yet ruled on whether enterprise contractual protections alone preserve privilege. The safest approach remains preventing identifiable client data from reaching any third-party server.
Can I still use AI in my legal practice after the Heppner ruling?
Yes. The Heppner ruling does not prohibit AI use — it establishes that using AI carelessly with client data carries serious privilege consequences. Attorneys can continue using AI productively by adopting privacy-first platforms that de-identify data before transmission, limiting AI use to tasks that do not involve privileged information, obtaining informed client consent, and following their state bar's ethics guidance. The key distinction is between using AI with appropriate safeguards and using it in a way that exposes privileged information to third parties. The former is increasingly necessary to remain competitive. The latter is now demonstrably dangerous.
What should I do about documents I already created using ChatGPT with client data?
The impact on previously generated documents depends on the specific circumstances of each matter. If you entered client-identifiable information into a consumer AI tool, conduct a privilege review with ethics counsel immediately. Consider whether subject-matter waiver could apply to related documents beyond those directly generated by the AI, and evaluate whether proactive disclosure to affected clients is appropriate. Review your state bar's guidance on remedial measures. Early voluntary disclosure and remediation generally mitigate disciplinary and malpractice consequences — waiting for opposing counsel or a court to discover the issue is almost always worse.
How does de-identification protect attorney-client privilege when using AI?
De-identification eliminates the third-party disclosure that triggers privilege waiver under the Heppner analysis. A platform like PrivacyFrom.AI automatically replaces all personally identifiable information — names, dates, case numbers, financial figures, and 50+ other entity types — with reversible tokens before any data leaves your device. The AI model receives only anonymized text such as "[PERSON_1] communicated with [PERSON_2] regarding [ORGANIZATION_1]'s financial obligations." Because no privileged information is ever transmitted to the AI provider, there is no third-party disclosure and no basis for privilege waiver. After the AI returns its response, the original identifiers are re-inserted on your local device — addressing the Heppner court's core concern by ensuring no disclosure of identifiable privileged information occurs.
The legal profession cannot afford to treat AI privacy as an open question. The Heppner ruling answered it. The attorneys and firms that act now — by implementing de-identification safeguards, updating their governance policies, and training their teams — will preserve their clients' privilege and their own professional standing. Those who delay are accumulating risk with every prompt.
Start your free trial and see how PrivacyFrom.AI protects attorney-client privilege automatically.