Oops, I Did It Again: Lawyers Rely on AI at Your Peril

Lawyers continue to be misled by AI-generated case law that appears genuine but isn’t. This post is about a personal experience and why lawyers can’t afford to stop thinking.

I Gave GAI Clear Instructions: It Still Lied

A few weeks ago, for fun, I asked the GAI program I use to look on the internet and see if there was a quote on a specified topic from a “Founding Father”. Within seconds, it provided me with an on-point quote, which was attributed to John Adams, accompanied by a hyperlinked citation. It was the best party trick ever–until it wasn’t. Because the quote didn’t exist. Anywhere. When I called it out, GAI replied: “It sounds like something John Adams would say.”

Yesterday, I tested it again.

I asked it to find the rule for a certain proposition, a rule of civil procedure that I knew existed. It told me the rule didn’t exist. I wanted to see if it would correct itself, so I asked it to back that up with a case and a link to the statute. It did—with confidence. It even provided a quote from the case that it said supported the position it had taken. Except it was still wrong: the rule did exist, and it had simply made up the quote.

When I pointed out the error and asked how this had happened, GAI explained:

“I incorrectly generalized and answered based on a commonly followed general rule.”

Mind you, I had given it specific, detailed instructions and prompts—things I had learned from CLE courses and articles about how to use AI and get accurate outputs. These included telling it not to make anything up, to double-check sources, and to provide links to public, official sources for every “fact” it retrieved from the internet.

What I got was a lie, wrapped in a polished, confident tone, dressed up like a real legal citation—because GAI is built to give me what I want and to sound persuasive and helpful, even when it’s dead wrong.

Lawyers’ Misuse of AI Continues to Make Headlines

Different courts, different lawyers, but the failure is identical: if you don’t read the case, the court will—and then you’ll make the news. Here is a partial list of headlines from just the past few weeks, hyperlinked to their sources:

May 14, 2025 – AI Hallucinations Strike Again: Two More Cases Where Lawyers Face Judicial Wrath for Fake Citations

May 21, 2025 – Judge Considers Sanctions Against Attorneys in Prison Case for Using AI in Court Filings

May 29, 2025 – Lawyer Sanctioned $6,000 for AI-Generated Fake Legal Citations

May 30, 2025 – Southern District of Florida Sanctions Lawyers for Submission of AI Hallucinated Caselaw

May 31, 2025 – US Lawyer Sanctioned After Being Caught Using ChatGPT for Court Brief

This Should Not Be News to Most of Us

The problem of overworked lawyers attempting to take shortcuts is not new. Only the method has changed. For decades, lawyers have been getting sanctioned or called out by opposing counsel for:

  • Using the headnote from a paid online legal research tool as a “quote” without reading the opinion to confirm it.
  • Copying a pleading from a prior case and filing it without checking if the law still applies.
  • Lifting a motion from a CLE binder, online research tool, or lawyer listserv conversation and passing it off as their own.
  • Using the analysis from someone else’s case within the firm, without knowing or understanding the facts, court, or procedural history of that case. 

Every one of these examples has the same flaw: the lawyer wanted a way to circumvent doing the work we get paid to do, i.e., think.

The Real Problem Isn’t AI

AI isn’t the problem. It’s just the newest version of a long-standing temptation: to find a shortcut. Something to save time, make us look smart, or help us meet a deadline when the work hasn’t been done.

If you’re feeling pressure to use AI—or to do things faster, cheaper, or “more efficiently” than ever before—hear this:

You get paid to think, and no technology can replace your judgment or experience.

Your speed or formatting skills don’t determine your value. You are trained to analyze, reason, and argue. Your value lies in how you perceive what matters, identify what’s missing, and determine what it will take to achieve your client’s goals. You can’t delegate that to a machine any more than you can outsource it to someone else’s pleading or form.

And don’t let fear push you to use a tool you don’t understand. Stop. Breathe. Learn what it can do. Learn what it can’t. Use it wisely—don’t rely on it to think for you, and don’t believe it when it assures you that it has. 

For Judges and Supervisors: A Fix Worth Considering

To stop this problem from recurring, consider this simple fix:

Require every pleading filed with the court that contains a reference, cite, or quotation to any authority to be internally hyperlinked to an attached appendix that includes a copy of the source with the relevant rule, holding, or quote highlighted for the court’s convenience.

This should become standard, just like a certificate of service. Lawyers should also apply this requirement to the work of those they supervise. And no, clients should not pay for this “extra” work; it is overhead, the price of doing business in the era of AI.

The Technology Changed, the Job Didn’t

This isn’t about shaming lawyers. It’s about reminding us who we are.

We are not prompt engineers or data processors. We are professionals who took an oath and have duties to our clients, the courts, and the public.

So please, don’t be a headline. 

Read the case. Check the quotes. Confirm the law is still good. And don’t rely on any tool that doesn’t distinguish between the truth and a lie.

The Larger Cost of Reckless GAI Use in Litigation

Jeanne M Huey 

March 3, 2025

Courts, opposing parties, and clients all suffer when lawyers fail to properly vet work generated by artificial intelligence (“AI”), leading to wasted judicial resources, procedural delays, and broken client and public trust. 

Gauthier: GAI Hallucinations in Court Filings 

A recent case serves as a stark reminder that competence and diligence in using generative AI (“GAI”) tools are not optional; they are an ethical imperative. 

In Gauthier v. Goodyear Tire & Rubber Co., a lawyer was sanctioned for submitting unverified GAI-generated content in a response to a motion for summary judgment. No. 1:23-CV-281 (E.D. Tex. Nov. 25, 2024). This included two citations to entirely fabricated cases and nonexistent quotations from seven actual cases—that is, “hallucinations.” 

In sanctioning the lawyer, the court determined that the lawyer submitted a false statement of law to the court and failed to rectify the mistake after opposing counsel pointed it out in their reply. Instead, the lawyer did not address the error until the court issued a show-cause order. The court also remarked that “it is unclear what legal research, if any,” the lawyer conducted before filing the response. Id. at *5. 

The lawyer was ordered to pay $2,000 into the court registry, to take one hour of CLE on the topic of GAI in the legal field, and to provide his client with a copy of the sanctions order. The sanctions imposed were relatively mild considering the potential harm to the client, the opposing party, and the integrity of the court system. 

Wasted Resources and Judicial Frustration 

GAI-generated errors like those in Gauthier force courts to spend time unraveling the problem rather than addressing the substantive legal issues. That’s why courts increasingly impose strict requirements in their local rules regarding GAI use. 

In addition to finding that the lawyer violated Federal Rule of Civil Procedure 11, the Gauthier court found violations of its local rules requiring lawyers to exercise candor and diligence and mandating that lawyers review and verify any computer-generated content before submitting it. 

Moreover, the court likely could have found a violation of ABA Model Rule 3.3 pertaining to candor to the tribunal, and ABA Model Rule 8.4(c) and (d) for engaging in conduct “involving dishonesty, fraud, deceit or misrepresentation” and that is “prejudicial to the administration of justice.” The lawyer also could have been referred to the local disciplinary authority by the court, opposing counsel, opposing party, or client. 

Rule 11 already provides a framework for addressing these concerns, but if lawyers continue to disregard it, courts may impose stricter measures. Lead counsel might be required to certify or swear to the accuracy of every filing—something their signature should already indicate. Courts could mandate CLE credits on AI use as a condition for good standing or pro hac vice admission. Additional burdens may include requiring attorneys to keep records of all GAI-generated prompts used in preparing the filing or to attach every cited case as an appendix with key holdings and quotations highlighted. Courts might also implement prefiling review requirements, mandating independent verification of AI-generated content before docketing. 

Unnecessary Burdens and Expenses for Opposing Parties 

In Gauthier, the opposing counsel said that they had spent significant time and resources—over $7,500 in fees—determining that the citations and quotes in the response were fictitious and bringing the issue to the court’s attention. However, lawyers who use GAI irresponsibly do not simply create wasted work for their adversaries. An offending lawyer almost certainly violates their ethical obligations under ABA Model Rule 3.1, which prohibits lawyers from bringing or defending claims that lack a legal or factual basis, ABA Model Rule 3.2, which requires lawyers to make reasonable efforts to expedite litigation, and ABA Model Rule 3.4, which mandates fairness to opposing parties by prohibiting conduct that delays or burdens litigation without substantial justification. 

Although courts may not always directly compensate opposing counsel for fees incurred by the other side’s careless use of GAI—the Gauthier court did not—judges are not blind to the larger impact of such conduct. If GAI-related errors continue to occur, lawyers can expect (and should ask) courts to shift the burden of these costs onto the offending lawyers through harsher sanctions and fee-shifting orders. 

Sanctions and Eroded Client Trust 

The most direct impact of GAI incompetence is on the lawyer-client relationship. In Gauthier, the court ordered that the sanctions order be provided to the lawyer’s client so that the client would know that his lawyer had been sanctioned for submitting false information to the court. 

Beyond the immediate embarrassment and potential financial consequences to the lawyer, basic competence is at play. ABA Model Rule 1.1 Comment 8 requires lawyers to maintain competence in the technology that they are using in their practice, including understanding the technology’s benefits and risks. A lawyer who does not understand how GAI functions or fails to verify GAI-generated work has not met this duty of competence. While clients expect lawyers to be efficient and cost-effective and may think that using GAI will help reduce legal fees, they do not pay lawyers to take risks with their case outcomes; they certainly don’t pay for them to misrepresent the law and be embarrassed in front of a judge. A single GAI-related error could permanently undermine clients’ faith in their attorney and end the representation. 

The Public’s Perception: Lawyers Need to Control the Narrative 

Currently, the public is receiving two conflicting messages about GAI: 

• “GAI will replace lawyers.” 

• “Lawyers are getting sanctioned because they do not know how to use GAI.” 

Neither narrative is good for the profession. Clients will resist paying for legal expertise if GAI is considered an inevitable replacement. Public confidence in the legal system will erode if lawyers can’t be trusted to understand and use GAI correctly. 

The only way to control this perception is through responsible behavior, professionalism, and a commitment to meeting and exceeding our ethical duties under the rules when we use GAI and related technology. It will also be necessary to communicate effectively with clients about GAI and how it might be used within the firm, such as in billing software, or in their case, which may require their informed consent. 

Ethics opinions and court rules concerning GAI differ across jurisdictions, and the applicable standards of care are evolving rapidly. Staying current and ethically integrating GAI tools into our law practices will take time and attention. Those who fail to do so risk not just sanctions but harm to both their professional reputation and the credibility of the legal system. Lawyers who understand the risks and benefits of GAI and implement its use responsibly will not only protect their practice but also strengthen public trust in the profession and help shape its future. 

© 2024 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. 

Generative AI for Lawyers Part 2: Maintaining Confidentiality

ABA Formal Op. 512 focuses on the risks of using generative AI (GAI) in legal practice, with a key concern being the confidentiality of client information. Under ABA Model Rule 1.6, lawyers are obligated to protect all client-related information, including preventing inadvertent or unauthorized access. ABA Model Rule 1.9(c) extends this duty to former clients, and ABA Model Rule 1.18(b) to prospective clients.

Unauthorized Disclosure of Confidential Information: What Is the Risk with GAI?

Self-learning GAI poses a higher risk to client confidentiality than other technology used in a modern law practice because it can retain and reuse input data (prompts), increasing the chance of inadvertent disclosure or cross-use in other cases. This is true whether the information is used within a firm’s closed system—where the stored data is only used internally—or outside the firm in an open system—where data is shared with external sources.

Why do lawyers need to be concerned about inputting confidential information into an internal firm or “closed” GAI system? The answer lies in the distinction between access to confidential information and the use of that information within a firm. While lawyers and staff typically have access to all of the firm’s clients’ confidential information, using that information to prompt the firm’s self-learning GAI system creates a real risk that one client’s information may be applied to other clients’ cases. This may result in a breach of the confidentiality obligations owed to the first client and could occur without either lawyer realizing that a violation has taken place.

This risk is not just hypothetical. Multiple ethics opinions, including Opinion 512 and those issued by the Florida Bar and Pennsylvania & Philadelphia Bars, emphasize that self-learning GAI tools may inadvertently cause the disclosure of client information even in a closed system used exclusively within a single law firm.

Informed Consent—A Prerequisite for Using Confidential Information with GAI

So, what can be done to avoid a violation of Model Rule 1.6 for unauthorized disclosure of confidential client information under these circumstances? Opinion 512 concludes that, due to the unique risks posed by self-learning GAI, lawyers should obtain “informed consent” from the client before using any information related to the representation in GAI prompts—even within a firm’s “closed system.”

The opinion is quick to note that informed consent cannot be accomplished by a boilerplate acknowledgment or notice clause in an engagement letter. Informed consent is a defined term in ABA Model Rule 1.0(e) and requires that the lawyer provide the client with “adequate information and explanation about the material risks of and reasonably available alternatives to” the proposed conduct.

Opinion 512 explains that “adequate information and explanation” under these conditions calls for a “meaningful dialogue” with the client that includes:

  • the lawyer’s best judgment about why the GAI tool is being used;
  • the extent of and specific information about the risks involved in disclosing client information;
  • particulars about the kinds of client information that will be disclosed;
  • the ways in which others might use the information against the client’s interests;
  • a clear explanation of the GAI tool’s benefits to the representation; and
  • the risk that later users or beneficiaries of the GAI tool will have access to information relating to the representation.

This list, from Opinion 512, makes it clear that any lawyer seeking informed consent must have more than a general awareness of GAI technology. They must, as ABA Model Rule 1.1 Comment 8 sets out, be competent in understanding the benefits and risks of that technology.

Obtaining informed consent here aligns with a lawyer’s duty to communicate effectively with their client about the work being undertaken in their case. Under ABA Model Rule 1.4, lawyers must inform clients about decisions affecting their representation, including the means proposed to achieve the client’s objectives. When using client confidential information in a self-learning GAI system is proposed, the client must be given enough information to make an informed decision about whether to permit it.

Finally, while “informed consent” does not require written consent, the best practice is to confirm the client’s consent either (1) in a writing by the client or (2) in a writing from the lawyer confirming the client’s oral consent. (See ABA Model Rule 1.0(b).) This approach helps protect the client’s interests and discharge the lawyer’s ethical duties, ensuring that trust and transparency remain intact throughout the representation.

In Part 3 of this series, we will explore how you can use self-learning GAI tools to benefit your client without disclosing information about the representation (confidential information) or obtaining informed consent.
