Why Your Client’s AI-Generated Documents Aren’t Privileged (Part 1)

This blog is based on the government’s motion in United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 6, 2026) (available on PACER) and on reports from legal and media outlets confirming that the judge in that case granted the government’s request to treat AI materials generated by the defendant as not privileged.

Judge Jed Rakoff of the Southern District of New York ruled last week that 31 documents a criminal defendant created using Claude (AI) before his arrest are not protected by the attorney-client privilege or the work-product doctrine.

The ruling creates serious risks for clients who believe they are being helpful or saving money by discussing their legal matter with AI, and equally serious risks for lawyers who fail to warn their clients of the harm that such AI use can cause.

So What Happened?

In the Heppner case, after learning that he was under investigation, the defendant apparently began running queries through Claude (AI) about his legal situation. According to the government’s motion, he did this entirely on his own; defense counsel was not involved. Upon his arrest, law enforcement seized devices containing these AI-generated materials. Defense counsel later claimed privilege over the materials, and the government asked the court to deem them not privileged.

The Government’s Three Arguments

In its motion, the government offered three separate reasons why the privilege didn’t apply to the defendant’s prompts or the AI-generated responses from Claude:

1. Claude isn’t a lawyer. It has no law degree, no duty of loyalty, and no professional obligations to any court. The motion relied in part on United States v. Ackert, which emphasizes that the privilege protects communications between a client and a lawyer, not a client’s communications with third parties like Claude that are later relayed to the lawyer.

2. Claude disclaims giving legal advice. The government argued that the defendant could not claim he was seeking legal advice from a tool that explicitly disclaims providing it. At the time, Claude’s Constitution expressly stated that Claude does not provide concrete legal advice and instead encourages users to consult an attorney if they appear to be seeking such advice.

3. The information isn’t confidential. Anthropic’s Privacy Policy at the time explained that it collects prompts and outputs, uses data to train the model, and may disclose information to government authorities and other third parties. The government cited case law for the proposition that users have limited privacy expectations in AI chats when the provider retains the data in the ordinary course of business and does not keep it confidential, which is precisely what Anthropic told its users in its policy.

You Can’t Fix It After the Fact

Defense counsel reportedly argued that even if the documents weren’t privileged when the defendant created them, they became privileged once he shared them with his lawyers. According to news reports, the judge rejected that argument.

The work-product claim failed as well. Work-product protection generally applies only to materials prepared by or at the direction of counsel. According to the government’s motion, defense counsel claimed that the defendant’s research was done for the “express purpose of talking to counsel” and obtaining his counsel’s legal advice. But they admitted that he had done the research on his own, not at their request or direction.

The government’s motion suggests that work-product protection might have applied if counsel had directed the defendant to undertake the AI research into his case. Even so, it’s hard to imagine a scenario in which a lawyer would direct a client to enter confidential and privileged case information into a commercial AI tool like the one used here.

For Lawyers: This Shouldn’t Be News

ABA Formal Opinion 512, issued in July 2024, told us that putting client confidential information into self-learning generative AI tools breaches a lawyer’s duty of confidentiality under ABA Model Rule 1.6 absent “informed consent” from the client.

Just as Opinion 512 says lawyers should not use self-learning AI tools with client information because those tools are third parties that may disclose the information to others, this ruling says clients can’t use self-learning AI tools with case information and maintain the privilege: different legal doctrines (confidentiality vs. privilege), but the same fundamental problem.

The AI tool is a third party, and information shared with it is not kept confidential. When a client uses such a tool to ask about or research their case, the privilege is destroyed just as surely as if they had posted questions about the case on Facebook or forwarded their lawyer’s emails to family members.

The Bottom Line

The court’s ruling in Heppner doesn’t change privilege law; it just applies old rules to new tech. For now, the most practical and conservative approach is to treat client-generated AI prompts and results as nonprivileged and to advise clients not to use commercial AI tools to research their legal matters.

Up Next: Part 2 of this blog will set out some practical ways to protect your practice, and your client’s case, in light of this ruling.

Federal Judge Rules Client’s Use of AI Waives the Privilege: What Our Clients Need to Know

In light of this week’s ruling in United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 6, 2026), I am adding the language below to my firm’s fee agreement. In the Heppner case, Judge Jed Rakoff of the Southern District of New York ruled that 31 documents the defendant created using Claude (AI) before his arrest, and later gave to his attorneys, are not protected by the attorney-client privilege or the work-product doctrine.

The docket entry in the case is sparse: just a text order granting the government’s motion without specifying which of the pending motions it applies to. I am working from the government’s motion (available on PACER) and from reporting in legal publications confirming that the judge granted the motion to deem the AI materials not privileged.

Here is what Hunt Huey PLLC is going to tell our clients so that they are informed about this risk:

Client Use of Artificial Intelligence Tools

Our communications with you are protected by the attorney-client privilege and our work on your behalf is protected by the attorney work-product doctrine. That means that others, including opponents in a lawsuit and their attorneys, are not entitled to ask for or receive this kind of information.

The attorney-client privilege can be waived by giving third parties information about your communications with us or our work on your behalf. One federal judge has held that ChatGPT, Claude, Gemini, and other AI tools are “third parties.” If you enter information about your case into such tools, you may waive the attorney-client privilege and the attorney work-product protection. In addition, any information you get back from these tools is not protected by the privilege, and others, including opposing parties and their lawyers, may be able to require that you give them both the questions you asked and the answers you got. Finally, deleting AI-generated material about your case could constitute spoliation of evidence, which may result in sanctions against you.

For these reasons, you agree that you will not use any AI tools like ChatGPT, Claude, or Gemini to research or analyze anything related to your case without talking to us first.

You also understand that we cannot accept or use AI-generated legal research created by you. Our fees reflect the work we must perform to represent you competently in accordance with our professional standards. Client-generated AI content cannot replace this work and may harm your case if the privilege is lost.

If you want to use AI for anything case-related, ask us first. We’ll explain the risks and see if there’s a safe way to proceed.

By signing below, you agree: (1) that you will not use AI tools for case-related matters without consulting us, (2) that you will tell us immediately if you have already used AI tools about this case, and (3) that you understand that AI content you create independently cannot reduce our fees or substitute for our legal work.

This restriction applies to the entire scope of our representation.

Client Initials: ________

Date:________

Oops, I Did It Again: Lawyers, Rely on AI at Your Peril

Lawyers continue to be misled by AI-generated case law that appears genuine but isn’t. This post is about a personal experience and why lawyers can’t afford to stop thinking.

I Gave GAI Clear Instructions: It Still Lied

A few weeks ago, for fun, I asked the GAI program I use to look on the internet and see if there was a quote on a specified topic from a “Founding Father.” Within seconds, it provided me with an on-point quote attributed to John Adams, accompanied by a hyperlinked citation. It was the best party trick ever, until it wasn’t. Because the quote didn’t exist. Anywhere. When I called it out, GAI replied: “It sounds like something John Adams would say.”

Yesterday, I tested it again.

I asked it to find the rule for a certain proposition, a rule of civil procedure that I knew existed. It told me the rule didn’t exist. I wanted to see if it would correct itself, so I asked it to back that up with a case and a link to the rule. It did, with confidence. It even provided a quote from the case that it said supported the position it had taken. Except it was still wrong: the rule did exist, and it had simply made up the quote.

When I pointed out the error and asked how this had happened, GAI explained:

“I incorrectly generalized and answered based on a commonly followed general rule.”

Mind you, I had given it specific, detailed instructions and prompts—things I had learned from CLE courses and articles about how to use AI and get accurate outputs. These included telling it not to make anything up, to double-check sources, and to provide links to public, official sources for every “fact” it retrieved from the internet.

What I got was a lie, wrapped in a polished, confident tone, dressed up like a real legal citation—because GAI is built to give me what I want and to sound persuasive and helpful, even when it’s dead wrong.

Lawyers’ Misuse of AI Continues to Make Headlines

Different courts, different lawyers, but the failure is identical: If you don’t read the case, the court will, and then you’ll make the news. Here is a partial list of headlines just from the past few weeks, each hyperlinked to its source:

May 14, 2025, AI Hallucinations Strike Again: Two More Cases Where Lawyers Face Judicial Wrath for Fake Citations

May 21, 2025, Judge Considers Sanctions Against Attorneys in Prison Case for Using AI in Court Filings

May 29, 2025, Lawyer Sanctioned $6,000 for AI-Generated Fake Legal Citations

May 30, 2025, Southern District of Florida Sanctions Lawyers for Submission of AI Hallucinated Caselaw

May 31, 2025, US Lawyer Sanctioned After Being Caught Using ChatGPT for Court Brief

This Should Not Be News to Most of Us

The problem of overworked lawyers attempting to take shortcuts is not new. Only the method has changed. For decades, lawyers have been getting sanctioned or called out by opposing counsel for:

  • Using the headnote from a paid online legal research tool as a “quote” without reading the opinion to confirm it.
  • Copying a pleading from a prior case and filing it without checking if the law still applies.
  • Lifting a motion from a CLE binder, online research tool, or lawyer listserv conversation and passing it off as their own.
  • Using the analysis from someone else’s case within the firm, without knowing or understanding the facts, court, or procedural history of that case. 

Every one of these examples has the same flaw: the lawyer wanted a way around doing the work we get paid to do, i.e., think.

The Real Problem Isn’t AI

AI isn’t the problem. It’s just the newest version of a long-standing temptation: to find a shortcut. Something to save time, make us look smart, or help us meet a deadline when the work hasn’t been done.

If you’re feeling pressure to use AI—or to do things faster, cheaper, or “more efficiently” than ever before—hear this:

You get paid to think, and no technology can replace your judgment or experience.

Your speed or formatting skills don’t determine your value. You are trained to analyze, reason, and argue. Your value lies in how you perceive what matters, identify what’s missing, and determine what it will take to achieve your client’s goals. You can’t delegate that to a machine any more than you can outsource it to someone else’s pleading or form.

And don’t let fear push you to use a tool you don’t understand. Stop. Breathe. Learn what it can do. Learn what it can’t. Use it wisely—don’t rely on it to think for you, and don’t believe it when it assures you that it has. 

For Judges and Supervisors: A Fix Worth Considering

To stop this problem from recurring, consider this simple fix:

Require every pleading filed with the court that contains a reference, cite, or quotation to any authority to be internally hyperlinked to an attached appendix that includes a copy of the source with the relevant rule, holding, or quote highlighted for the court’s convenience.

This should become standard, just like a certificate of service. Lawyers should also apply this requirement to the work of those they supervise. And no, clients should not pay for this “extra” work; it is overhead, the price of doing business in the era of AI.

The Technology Changed, the Job Didn’t

This isn’t about shaming lawyers. It’s about reminding us who we are.

We are not prompt engineers or data processors. We are professionals who took an oath and have duties to our clients, the courts, and the public.

So please, don’t be a headline. 

Read the case. Check the quotes. Confirm the law is still good. And don’t rely on any tool that doesn’t distinguish between the truth and a lie.