Why Your Client’s AI-Generated Documents Aren’t Privileged (Part 1)

This blog is based on the government’s motion in United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 6, 2026) (available on PACER) and on reports from legal and media outlets confirming that the judge in that case granted the government’s request to treat AI materials generated by the defendant as not privileged.

Judge Jed Rakoff of the Southern District of New York ruled last week that 31 documents a criminal defendant created using Claude (AI) before his arrest are not protected by the attorney-client privilege or the work-product doctrine.

The ruling creates serious risks for clients who believe they are being helpful or saving money by discussing their legal matter with AI, and equally serious risks for lawyers who fail to warn their clients of the potential harm that can result from such AI use.

So What Happened?

In the Heppner case, after learning that he was under investigation, the defendant apparently began running queries through Claude (AI) about his legal situation. According to the government’s motion, he did this entirely on his own—defense counsel was not involved. Upon his arrest, law enforcement seized devices containing these AI-generated materials. Defense counsel later claimed privilege over them. The government asked the court to deem them not privileged.

The Government’s Three Arguments

In its motion, the government offered three separate reasons why the privilege didn’t apply to the defendant’s prompts or the AI-generated responses from Claude:

1. Claude isn’t a lawyer. It has no law degree, no duty of loyalty, and no professional obligations to any court. The motion relied in part on United States v. Ackert, which emphasizes that privilege protects communications between a client and a lawyer, not a client’s communications with third parties like Claude that are later communicated to the lawyer.

2. Claude disclaims giving legal advice. The government argued that the defendant could not claim he was seeking legal advice from a tool that expressly disclaims providing it. At the time, Claude’s Constitution stated that Claude does not provide concrete legal advice and instead encourages users to consult an attorney when they appear to be seeking such advice.

3. The information isn’t confidential. Anthropic’s Privacy Policy at the time explained that it collects prompts and outputs, uses data to train the model, and may disclose information to government authorities and other third parties. The government cited case law for the proposition that users have a limited expectation of privacy in AI chats when the provider retains the data in the ordinary course of business and does not keep it confidential, which is exactly what Anthropic’s policy told its users.

You Can’t Fix It After the Fact

Defense counsel reportedly argued that even if the documents weren’t privileged when the defendant created them, they became privileged once he shared them with his lawyers. According to news reports, the judge rejected that argument.

The work-product claim failed as well. Work-product protection generally applies only to materials prepared by or at the direction of counsel. According to the government’s motion, defense counsel claimed that the defendant’s research was done for the “express purpose of talking to counsel” and obtaining his counsel’s legal advice. But they conceded that he had done the research on his own, not at their request or direction.

The government’s motion suggests that work-product protection might have applied had counsel directed the defendant to undertake the AI research into his case. Still, it’s hard to imagine a scenario in which a lawyer would direct a client to enter confidential and privileged case information into a commercial AI tool like the one used here.

For Lawyers: This Shouldn’t Be News

ABA Formal Opinion 512, issued in July 2024, told us that putting client confidential information into self-learning generative AI tools breaches a lawyer’s duty of confidentiality under ABA Model Rule 1.6 absent “informed consent” from the client.

Just as Opinion 512 says lawyers should not use self-learning AI tools with client information because the tools are third parties that may disclose that information to others, this ruling says clients can’t use self-learning AI tools with case information and maintain the privilege: different legal doctrines (confidentiality vs. privilege), but the same fundamental problem.

The AI tool is a third party, and the information shared with it is not kept confidential. When a client uses such a tool to ask about or research their case, the privilege is destroyed just as surely as if they had posted questions about the case on Facebook or forwarded their lawyer’s emails to family members.

The Bottom Line

The court’s ruling in Heppner doesn’t change privilege law; it just applies old rules to new tech. For now, the most practical and conservative approach is to treat client-generated AI prompts and results as nonprivileged and to advise clients not to use commercial AI tools to research their legal matters.

Up Next: Part 2 of this blog will set out some practical ways to protect your practice—and your client’s case—in light of this ruling.

Federal Judge Rules Client’s Use of AI Waives the Privilege: What Our Clients Need to Know

In light of this week’s ruling in United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 6, 2026), I am adding the language below to my Firm’s fee agreement. In the Heppner case, Judge Jed Rakoff of the Southern District of New York ruled that 31 documents the defendant created using Claude (AI) before his arrest—and later gave to his attorneys—are not protected by the attorney-client privilege or the work-product doctrine.

The docket entry in the case is sparse—just a text order granting the government’s motion without specifying which of the pending motions it applies to. I am working from the government’s motion—available on PACER—and from reporting by legal publications confirming that the judge granted the motion to deem the AI materials not privileged.

Here is what Hunt Huey PLLC is going to tell our clients so that they are informed about this risk:

Client Use of Artificial Intelligence Tools

Our communications with you are protected by the attorney-client privilege and our work on your behalf is protected by the attorney work-product doctrine. That means that others, including opponents in a lawsuit and their attorneys, are not entitled to ask for or receive this kind of information.

The attorney-client privilege can be waived by giving third parties information about your communications with us or our work on your behalf. One federal judge has held that ChatGPT, Claude, Gemini, and other AI tools are “third parties.” If you enter information about your case into such tools, you may waive the attorney-client privilege and the attorney work-product protection. In addition, any information you get back from these tools is not protected by the privilege, and others, including opposing parties and their lawyers, may be able to require that you give them both the questions you asked and the answers you got. Finally, deleting AI-generated material about your case could constitute spoliation of evidence, which may result in sanctions against you.

For these reasons, you agree that you will not use any AI tools like ChatGPT, Claude, or Gemini to research or analyze anything related to your case without talking to us first.

You also understand that we cannot accept or use AI-generated legal research created by you. Our fees reflect the work we must perform to represent you competently in accordance with our professional standards. Client-generated AI content cannot replace this work and may harm your case if the privilege is lost.

If you want to use AI for anything case-related, ask us first. We’ll explain the risks and see if there’s a safe way to proceed.

By signing below, you agree: (1) that you will not use AI tools for case-related matters without consulting us, (2) that you will tell us immediately if you have already used AI tools in connection with this case, and (3) that you understand that AI content you create independently cannot reduce our fees or substitute for our legal work.

This restriction applies to the entire scope of our representation.

Client Initials: ________

Date:________

No Shortcuts: Why Fundamentals Still Win in Law

My latest blog takes a hard look at recent cases where lawyers skipped the basics, and why “no shortcuts” isn’t just a football mantra — it’s a professional requirement.

By Jeanne M. Huey

The Basics Are the Story

I could have headlined this “Is This Still Happening?” But we’ve been asking that for over two years, and the answer is still “yes.” By “this,” I mean lawyers using AI for legal research — and not bothering to verify the results. And not at the margins — we’re talking about big international firms that have every research tool, policy, training program, and resource that money can buy.

Nobody expects us to be perfect all of the time. We’ve all fumbled — accidentally misquoted a case, misspelled a cite, or leaned on a holding that later got picked apart. Those are part of the game, and when they are caught and fixed, there is no penalty. But using AI-generated citations without checking them? That’s not a misstep — that’s running a hook and ladder on the very first play.

When Fundamentals Get Skipped

Last month, in Johnson v. Dunn, three Butler Snow LLP lawyers were sanctioned after one of them admitted to using unverified AI-generated citations in court filings. Until early 2024, this lawyer’s AI use had been purely personal — finding vacation spots, looking up fitness info, even researching colleges for his kids.

He told the Judge that he knew the firm’s AI policy but did not comply with it. How a tool he had only used casually for travel tips became a substitute for the firm’s paid legal research services — which cost thousands each month for a reason — is hard to understand. As a partner, he could have asked any paralegal, associate, or staff person to pull the cases and highlight the relevant passages for his review; reviewing them would have taken minutes. The only charitable explanation is that he thought AI was a legitimate legal research platform. If that’s true, this case shows that when it comes to the use of technology in the practice of law, even strong institutional safeguards mean little without constant education and personal accountability.

The Brief Bank

At the show-cause hearing in the case, another Butler Snow lawyer explained why he hadn’t checked the citations added by his colleague. His reasoning: Many of their cases involve the same law and precedent, so his team often pulls citations from older briefs and plugs them into briefs on the same topics. The suggestion was that this eliminated the need to rely on AI and was safer because they had used those cases before.

The Judge felt differently, writing that this practice was one of three factors that “deepened rather than allayed” her concerns.

The problem with this practice? Past use isn’t proof of present accuracy. Laws change. Precedent gets overturned. Context matters.

This same concern applies to lawyers who borrow motions or briefs from other lawyers — or who post to a listserv asking for forms or legal advice from other lawyers that they plan to file as their own. Whether the information comes from your firm’s file cabinet, another lawyer’s Dropbox, or an AI chatbot, if you haven’t verified it for the case at hand, you’re taking a shortcut you can’t defend.

Cite & Highlight: A Run Up the Middle

Here’s an easy solution: require every case cited to be attached in an appendix with the relevant quote or holding highlighted. I call it “Cite & Highlight.” Courts could adopt it tomorrow. Firms could implement it today. The time it takes is nothing compared to the reputational cost of getting burned by a bad citation.

In Johnson v. Dunn, the partner who dropped AI-generated citations into a filing without review failed to make the reasonable inquiry Rule 11 requires. In Lacey v. State Farm (C.D. Cal. May 5, 2025), several large national firms made the same mistake with AI — this time with 27 citations, nine wrong and at least two nonexistent. Whether the bad law comes from AI, an old brief, or a motion written by another lawyer, the problem is the same: If your name is going on it and you haven’t verified it, you’re skipping the work. And in law, like football, there’s no substitute for work.

Flash and Fundamentals

Oregon football has a reputation for flash — the best uniforms, the newest tech, the gleaming facilities (thanks, Uncle Phil). But that’s not why they win.

Coach Lanning tells his players to do the work every single day and ask themselves: “How can I improve?” To own mistakes. To learn from people who do it better. To have the patience to get it right, because fast and wrong is still wrong. That lesson fits the courtroom as well as it does the gridiron.

The ethics rules don’t demand excellence — they set the baseline. But clients don’t hire us for the baseline. They pay us to prepare, check, and deliver our best. Yes, it is hard, but if it were easy, anyone could do it. The practice of law requires intense work done meticulously every time.

Senior lawyers: model that standard. Young lawyers: don’t let pressure for speed push you into skipping steps. Stop feeding the myth that good work can be done in a flash. Shortcuts might buy you a little time today, but they undermine your work product, your discipline, and your success in the long run.

Law is not a job — it’s a profession. Every filing, every argument, every case reflects our expertise and pride in our work. Wins — in football or in law — are constructed day by day, decision by decision. And there are no shortcuts.

Go Ducks.

Oops, I Did It Again: Lawyers, Rely on AI at Your Peril

Lawyers continue to be misled by AI-generated case law that appears genuine but isn’t. This post is about a personal experience and why lawyers can’t afford to stop thinking.

I Gave GAI Clear Instructions: It Still Lied

A few weeks ago, for fun, I asked the GAI program I use to look on the internet and see if there was a quote on a specified topic from a “Founding Father.” Within seconds, it provided me with an on-point quote attributed to John Adams, accompanied by a hyperlinked citation. It was the best party trick ever–until it wasn’t. Because the quote didn’t exist. Anywhere. When I called it out, GAI replied: “It sounds like something John Adams would say.”

Yesterday, I tested it again.

I asked it to find the rule for a certain proposition–a rule of civil procedure that I knew existed. It told me the rule didn’t exist. I wanted to see if it would correct itself, so I asked it to back that up with a case and a link to the rule. It did—with confidence. It even provided a quote from the case that it said supported the position it had taken. Except it was still wrong–the rule did exist, and it had simply made up the quote.

When I pointed out the error and asked how this had happened, GAI explained:

“I incorrectly generalized and answered based on a commonly followed general rule.”

Mind you, I had given it specific, detailed instructions and prompts—things I had learned from CLE courses and articles about how to use AI and get accurate outputs. These included telling it not to make anything up, to double-check sources, and to provide links to public, official sources for every “fact” it retrieved from the internet.

What I got was a lie, wrapped in a polished, confident tone, dressed up like a real legal citation—because GAI is built to give me what I want and to sound persuasive and helpful, even when it’s dead wrong.

Lawyers’ Misuse of AI Continues to Make Headlines

Different courts, different lawyers, but the failure is identical: If you don’t read the case, the court will—and then you’ll make the news. Here is a partial list of headlines just from the past few weeks–hyperlinked to their sources:

May 14, 2025, AI Hallucinations Strike Again: Two More Cases Where Lawyers Face Judicial Wrath for Fake Citations

May 21, 2025, Judge Considers Sanctions Against Attorneys in Prison Case for Using AI in Court Filings

May 29, 2025, Lawyer Sanctioned $6,000 for AI-Generated Fake Legal Citations

May 30, 2025, Southern District of Florida Sanctions Lawyers for Submission of AI Hallucinated Caselaw

May 31, 2025, US Lawyer Sanctioned After Being Caught Using ChatGPT for Court Brief

This Should Not Be News to Most of Us

The problem of overworked lawyers attempting to take shortcuts is not new. Only the method has changed. For decades, lawyers have been getting sanctioned or called out by opposing counsel for:

  • Using the headnote from a paid online legal research tool as a “quote” without reading the opinion to confirm it.
  • Copying a pleading from a prior case and filing it without checking if the law still applies.
  • Lifting a motion from a CLE binder, online research tool, or lawyer listserv conversation and passing it off as their own.
  • Using the analysis from someone else’s case within the firm, without knowing or understanding the facts, court, or procedural history of that case. 

Every one of these examples has the same flaw: the lawyer wanted a way to circumvent doing the work we get paid to do, i.e., think.

The Real Problem Isn’t AI

AI isn’t the problem. It’s just the newest version of a long-standing temptation: to find a shortcut. Something to save time, make us look smart, or help us meet a deadline when the work hasn’t been done.

If you’re feeling pressure to use AI—or to do things faster, cheaper, or “more efficiently” than ever before—hear this:

You get paid to think, and no technology can replace your judgment or experience.

Your speed or formatting skills don’t determine your value. You are trained to analyze, reason, and argue. Your value lies in how you perceive what matters, identify what’s missing, and determine what it will take to achieve your client’s goals. You can’t delegate that to a machine any more than you can outsource it to someone else’s pleading or form.

And don’t let fear push you to use a tool you don’t understand. Stop. Breathe. Learn what it can do. Learn what it can’t. Use it wisely—don’t rely on it to think for you, and don’t believe it when it assures you that it has. 

For Judges and Supervisors: A Fix Worth Considering

To stop this problem from recurring, consider this simple fix:

Require every pleading filed with the court that contains a reference, cite, or quotation to any authority to be internally hyperlinked to an attached appendix that includes a copy of the source with the relevant rule, holding, or quote highlighted for the court’s convenience.

This should become standard, just like a certificate of service. Lawyers should also apply this requirement to the work of those they supervise. And no, the clients should not pay for this “extra” work; it is overhead–the price of doing business in the era of AI.
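For courts or firms that want a head start on assembling that appendix, here is a minimal sketch, in Python, of what a first pass might look like. It is hypothetical throughout: the file name and the citation pattern are invented for illustration, the pattern catches only the simplest “volume reporter page” forms, and a lawyer still has to pull, read, verify, and highlight every case it finds.

import re
from pathlib import Path

# Hypothetical "cite & highlight" first pass: scan a draft pleading saved
# as plain text for simple case-reporter citations and print a checklist
# for assembling the appendix. The pattern is deliberately naive and will
# miss many real Bluebook citation forms.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                   # volume number
    r"(?:U\.S\.|S\. Ct\.|F\.2d|F\.3d|F\.4th|F\. Supp\. 2d|F\. Supp\. 3d)\s+"
    r"\d{1,5}\b"                                      # first page
)

def build_checklist(brief_path: str) -> list[str]:
    """Return unique citation strings in order of first appearance."""
    text = Path(brief_path).read_text(encoding="utf-8")
    seen: dict[str, None] = {}
    for match in CITATION_RE.finditer(text):
        seen.setdefault(match.group(0))
    return list(seen)

if __name__ == "__main__":
    # "draft_brief.txt" is a made-up file name for this example.
    for i, cite in enumerate(build_checklist("draft_brief.txt"), start=1):
        # A human still has to pull each case, confirm it says what the
        # brief claims, and highlight the relevant passage.
        print(f"Tab {i}: {cite}  [ ] pulled  [ ] verified  [ ] highlighted")

A script like this only finds candidate citations; the verification is, and must remain, the lawyer’s job.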

The Technology Changed, the Job Didn’t

This isn’t about shaming lawyers. It’s about reminding us who we are.

We are not prompt engineers or data processors. We are professionals who took an oath and have duties to our clients, the courts, and the public.

So please, don’t be a headline. 

Read the case. Check the quotes. Confirm the law is still good. And don’t rely on any tool that doesn’t distinguish between the truth and a lie.

Generative AI for Lawyers Part 3: Ethical Use of Hypotheticals with GAI

By Jeanne M. Huey

In the previous entry in this series, we discussed ABA Formal Opinion 512’s admonition against inputting any information relating to a client’s representation (confidential information under ABA Model Rule 1.6) into self-learning generative artificial intelligence (GAI) software without first obtaining the client’s “informed consent.”

Consider, however, that there are a variety of ways to use self-learning GAI without either disclosing any client confidential information or obtaining the client’s “informed consent.” These methods allow you to reap the benefits of self-learning GAI for your clients while maintaining your ethical obligations under the relevant rules and opinions.

Ethical Use of Hypotheticals in Case Discussions

ABA Model Rule 1.6 Comment 4 advises that using a hypothetical to “discuss issues relating to the representation is permissible so long as there is no reasonable likelihood that the listener will be able to ascertain the identity of the client or the situation involved.” These limits are particularly relevant in any public-facing discussion, because audience members can easily connect the dots from contextual information in a hypothetical to learn who the client is, or can simply look up the speaker’s cases online to get the same information. This makes it nearly impossible to say that there is “no reasonable likelihood” that client confidential information will be disclosed when lawyers use hypotheticals based on their clients and cases in settings where the lawyer’s identity is known.

ABA Formal Opinion 480 (public comments by lawyers such as blog posts and social media) and ABA Formal Opinion 511R (lawyers discussing cases on lawyer forums such as LISTSERV) take this principle further. Both opinions warn against the use of hypotheticals in any public context without the client’s informed consent when there’s a possibility that third parties could deduce the identity or specific details of a client’s situation.

The combined guidance from these ethics opinions and related rules establishes a high bar for confidentiality in public legal commentary and makes it clear that clever wordsmithing and linguistic acrobatics alone (i.e., using hypotheticals) will not insulate lawyers from the risk of liability for a breach of the confidentiality rules in these kinds of settings.

Advanced Strategies for Securely Using Hypotheticals with GAI

While any use of a hypothetical to discuss a client or case requires vigilance regarding client confidentiality, GAI may offer a more protective environment for the use of a well-written hypothetical.

In contrast to using hypotheticals in public discussions or commentary, using a hypothetical in a GAI prompt significantly reduces the likelihood of associating a scenario with a specific client due to GAI’s inherent anonymity; there is no identifiable author or audience involved. This allows attorneys to use hypotheticals to effectively analyze complex legal issues while safeguarding client confidentiality and to do so without seeking informed consent from the client. This distinction aligns with ABA guidelines on confidentiality, which emphasize that the risk of disclosure varies depending on the audience and platform used.

Consider the following approaches:

1. Abstract the Core Legal Issue

When crafting hypotheticals, distill the client’s situation to its fundamental legal principles, omitting any specifics that could reveal their identity—such as names, dates, financial figures, or unique circumstances. Focus on the legal doctrines or statutory interpretations at play rather than granular details.

Example:

Instead of asking about a “global pharmaceutical company facing allegations of off-label marketing in violation of FDA regulations,” reframe the inquiry to address “a corporation navigating regulatory compliance challenges in the context of strict federal oversight.” This allows you to explore the complexities of regulatory compliance without disclosing identifiable information.

2. Emphasize Legal Theories and Precedents

Center your inquiries on broader legal theories, jurisprudence, or procedural rules. You can gain insights without tying the discussion to specific client facts by focusing on legal frameworks and landmark cases.

Example:

Rather than delving into the specifics of a complex international arbitration your client is involved in, you might ask, “How have recent court decisions impacted the enforcement of arbitral awards under the New York Convention?” This approach examines critical legal interpretations without disclosing client involvement.

3. Frame Questions Around Legal Processes and Strategies

Pose questions that concentrate on legal processes, litigation strategies, or best practices rather than specific factual scenarios. This enables you to gather strategic insights while maintaining confidentiality.

Example:

If advising a client on mitigating risks in cross-border mergers, you could ask, “What are effective due diligence strategies for uncovering potential liabilities in international M&A transactions?” This allows exploration of procedural tactics without referencing the client’s situation.

4. Use Analogous Legal Contexts

Construct hypotheticals using different contexts or industries that parallel your client’s issues. By transferring the legal problem to a comparable but distinct setting, you can examine relevant principles while further obscuring client details.

Example:

If your client is an energy company facing environmental litigation over alleged contamination, you might frame the hypothetical around a manufacturing company dealing with toxic-tort claims. The legal principles regarding environmental liability and defense strategies remain pertinent, but the context shift protects client confidentiality.

By employing these techniques, you can effectively leverage GAI tools to explore complex legal issues while rigorously safeguarding client confidentiality.
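To make the abstraction step concrete, here is a minimal, hypothetical sketch in Python. Every name and term in it is invented for illustration, and it is only a mechanical backstop: a find-and-replace pass that generalizes identifying terms before a prompt leaves your machine. It cannot substitute for the lawyer’s own judgment about what could identify the client.

# Hypothetical illustration of "abstract the core legal issue": swap
# client-identifying terms for generic ones before the prompt is sent.
GENERALIZATIONS = {
    "Acme Pharma, Inc.": "a corporation",
    "Acme": "the company",                 # catch shorthand references too
    "off-label marketing": "regulatory noncompliance",
    "FDA regulations": "strict federal oversight rules",
}

def abstract_prompt(draft: str) -> str:
    """Replace specific terms with generic ones, longest keys first so
    that 'Acme Pharma, Inc.' is handled before the bare 'Acme'."""
    for specific in sorted(GENERALIZATIONS, key=len, reverse=True):
        draft = draft.replace(specific, GENERALIZATIONS[specific])
    return draft

draft = ("What defenses should Acme Pharma, Inc. raise against allegations "
         "of off-label marketing in violation of FDA regulations?")
print(abstract_prompt(draft))
# Prints: What defenses should a corporation raise against allegations of
# regulatory noncompliance in violation of strict federal oversight rules?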

© 2024 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

Generative AI for Lawyers Part 1: Competence, Professionalism, and Risks

By Jeanne M. Huey

 © 2024 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. 

In today’s fast-paced legal world, lawyers face a critical challenge: how to maintain competence amidst rapidly evolving technology—specifically, generative AI (GAI) tools. The American Bar Association’s Formal Ethics Opinion 512 sheds light on this important issue, guiding us on how to stay ethical while navigating new technologies that help us better serve our clients.

Competence for lawyers in 2024 involves more than just knowledge of the law. Under ABA Model Rule 1.1, lawyers must provide competent representation, which includes having the legal knowledge, skill, thoroughness, and preparation reasonably necessary for their work. Under Comment 8 to Rule 1.1, this also encompasses an understanding of the technology that lawyers use in their practice.

Importantly, competence in the technology that lawyers use doesn’t require every lawyer to become a tech expert or AI specialist. As Opinion 512 reminds us, however, it is not enough to simply hire someone else who does know about the risks and benefits of that technology. Lawyers must have a “reasonable understanding” of the capabilities and limitations of the specific technology they use, and that includes GAI. Lawyers can meet this standard either by acquiring a reasonable understanding of the benefits and risks of GAI tools on their own or by drawing on the expertise of others who can provide guidance about the relevant technology’s capabilities and limitations. Put another way, remaining ignorant about the technology used in their law practice—such as GAI—is not an option. And, of course, this isn’t a one-time task: technology, particularly GAI, is evolving rapidly, and staying competent means keeping pace with these advancements.

Understanding the Risks of Generative AI—What Does GAI Have to Say about It?

When asked about the number one risk to lawyers who use GAI for legal work, the GAI program I use (ChatGPT) told me:

The number one risk associated with lawyers using ChatGPT is providing inaccurate or misleading legal advice, often due to the AI’s limitations in understanding legal context and nuances.

[Emphasis added.] We have all heard about lawyers who blindly took flawed or completely unfounded legal analysis or “pretend” caselaw generated by GAI, plugged it into a pleading, and filed it with the court. One would think by now that all lawyers would understand that GAI is not a substitute for actual legal work and analysis—the work that lawyers are trained in and get paid to provide to their clients. Nonetheless, at least a few attorneys have recently been referred to their local disciplinary authority for citing a nonexistent case generated by ChatGPT in a legal brief, which the court found violated Federal Rule of Civil Procedure 11 and amounted to the submission of a false statement to the court. It also likely violated the attorneys’ duty of candor to the tribunal under ABA Model Rule 3.3 or its equivalent.

The GAI program then went on to remind me:

AI tools can generate convincing-sounding responses that might seem factually or legally correct but can include factual inaccuracies, outdated information, or what’s known as “AI hallucinations”—where the AI confidently produces false or fabricated information.

This statement is true—and is part of the allure of using GAI. The “convincing-sounding” responses are so tempting to simply cut and paste. Consider, however, that soon—if it has not already happened—everyone in the legal system (i.e., judges, lawyers, paraprofessionals, clients, and professors) will be able to recognize the difference between GAI-generated text and actual legal analysis and argument written by a skilled, smart, insightful, and intelligent lawyer. When that happens, anyone who chooses to continue using lightly edited GAI text, even in everyday correspondence, will lose credibility with their colleagues and the courts in which they practice.

Competence in the AI Age

So, what does competence in this AI-driven era require? First and foremost, it involves independent verification of AI output. Simply copy-pasting what the tool produces and sending it off to a client or court is not enough. And trying to mask your AI-generated text by running it through a program that “humanizes” it is not a solution. Try it once and you will see why. There are no shortcuts when it comes to the actual practice of law. Lawyers must apply their legal knowledge and judgment to review the AI’s work and modify it accordingly.

The degree of verification needed will depend on the task. But never forget that clients usually face complex, emotional, and high-stakes situations, and they rely on lawyers for more than just legal knowledge—they need understanding, legal guidance, strategic thinking, and human empathy. And they deserve (and lawyers get paid for) far more than just the output of a machine.

No amount of technology can substitute for the tailored advice that comes from years of training, real-world experience, and the trust built through personal relationships. As technology reshapes how lawyers work, understanding how to navigate these changes ethically and responsibly is vital for keeping clients happy, maintaining competence, and complying with our rules of professional conduct.