Why Your Client’s AI-Generated Documents Aren’t Privileged (Part 1)

This post is based on the government’s motion in United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 6, 2026) (available on PACER), and on reports from legal and media outlets confirming that the judge in that case granted the government’s request to treat the AI materials the defendant generated as not privileged.

Judge Jed Rakoff of the Southern District of New York ruled last week that 31 documents a criminal defendant created using Claude, Anthropic’s AI chatbot, before his arrest are not protected by the attorney-client privilege or the work-product doctrine.

The ruling creates serious risks for clients who believe they are being helpful or saving money by discussing their legal matters with AI, and equally serious risks for lawyers who fail to warn their clients of the potential harm such AI use can cause.

So What Happened?

In the Heppner case, after learning that he was under investigation, the defendant apparently began running queries about his legal situation through Claude. According to the government’s motion, he did this entirely on his own; defense counsel was not involved. Upon his arrest, law enforcement seized devices containing these AI-generated materials. Defense counsel later claimed privilege over the materials, and the government asked the court to deem them not privileged.

The Government’s Three Arguments

In its motion, the government offered three separate reasons why the privilege didn’t apply to the defendant’s prompts or the AI-generated responses from Claude:

1. Claude isn’t a lawyer. It has no law degree, no duty of loyalty to the client, and no professional obligations to any court. The motion relied in part on United States v. Ackert, which emphasizes that the privilege protects communications between a client and a lawyer, not a client’s communications with third parties like Claude, even when those communications are later shared with the lawyer.

2. Claude disclaims giving legal advice. The government argued that the defendant could not plausibly claim he was seeking legal advice from a tool that explicitly says it does not give legal advice. At the time, Claude’s Constitution expressly stated that Claude does not provide concrete legal advice and instead encourages users to consult an attorney when they appear to be seeking it.

3. The information isn’t confidential. Anthropic’s Privacy Policy at the time explained that Anthropic collects prompts and outputs, uses that data to train its models, and may disclose information to government authorities and other third parties. The government cited case law for the proposition that users have limited privacy expectations in AI chats when the provider retains the data in the ordinary course of business and does not keep it confidential, precisely the practices Anthropic’s policy disclosed to users.

You Can’t Fix It After the Fact

Defense counsel reportedly argued that even if the documents weren’t privileged when the defendant created them, they became privileged once he shared them with his lawyers. According to news reports, the judge rejected that argument. That result follows long-standing doctrine: a preexisting document that wasn’t privileged when it was created doesn’t become privileged merely because the client later hands it to counsel.

The work-product claim failed as well. Work-product protection generally applies only to materials prepared in anticipation of litigation by counsel or at counsel’s direction. According to the government’s motion, defense counsel claimed that the defendant did his research for the “express purpose of talking to counsel” and obtaining his counsel’s legal advice. But counsel also admitted that he had done the research on his own, not at their request or direction.

The government’s motion suggests that work-product protection might have applied had counsel directed the defendant to undertake the AI research into his case. Even so, it’s hard to imagine a scenario in which a lawyer would direct a client to enter confidential and privileged case information into a commercial AI tool like the one used here.

For Lawyers: This Shouldn’t Be News

ABA Formal Opinion 512, issued in July 2024, told us that putting confidential client information into self-learning generative AI tools breaches a lawyer’s duty of confidentiality under ABA Model Rule 1.6 absent the client’s “informed consent.”

Just as Opinion 512 says lawyers should not feed client information into self-learning AI tools because those tools are third parties that may disclose the information to others, this ruling says clients can’t feed case information into self-learning AI tools and maintain the privilege: different legal doctrines (confidentiality vs. privilege), but the same fundamental problem.

The AI tool is a third party, and the information shared with it is not kept confidential. When a client uses such a tool to ask about or research their case, they destroy the privilege just as surely as if they had posted questions about the case on Facebook or forwarded their lawyer’s emails to family members.

The Bottom Line

The court’s ruling in Heppner doesn’t change privilege law; it just applies old rules to new tech. For now, the most practical and conservative approach is to treat client-generated AI prompts and results as nonprivileged and to advise clients not to use commercial AI tools to research their legal matters.

Up Next: Part 2 of this blog will set out practical ways to protect your practice, and your client’s case, in light of this ruling.