Lawyers continue to be misled by AI-generated case law that appears genuine but isn’t. This post is about a personal experience and why lawyers can’t afford to stop thinking.
I Gave GAI Clear Instructions: It Still Lied
A few weeks ago, for fun, I asked the GAI (generative AI) program I use to search the internet for a quote on a specified topic from a “Founding Father.” Within seconds, it provided an on-point quote attributed to John Adams, complete with a hyperlinked citation. It was the best party trick ever–until it wasn’t. Because the quote didn’t exist. Anywhere. When I called it out, GAI replied: “It sounds like something John Adams would say.”
Yesterday, I tested it again.
I asked it to find the rule for a certain proposition: a rule of civil procedure that I knew existed. It told me the rule didn’t exist. I wanted to see if it would correct itself, so I asked it to back that up with a case and a link to the statute. It did—with confidence. It even provided a quote from the case that it said supported the position it had taken. Except it was still wrong–the rule did exist, and it had simply made up the quote.
When I pointed out the error and asked how this had happened, GAI explained:
“I incorrectly generalized and answered based on a commonly followed general rule.”
Mind you, I had given it specific, detailed instructions and prompts—things I had learned from CLE courses and articles about how to use AI and get accurate outputs. These included telling it not to make anything up, to double-check sources, and to provide links to public, official sources for every “fact” it retrieved from the internet.
What I got was a lie, wrapped in a polished, confident tone, dressed up like a real legal citation—because GAI is built to give me what I want and to sound persuasive and helpful, even when it’s dead wrong.
Lawyers’ Misuse of AI Continues to Make Headlines
Different courts, different lawyers, but the failure is identical: If you don’t read the case, the court will—and then you’ll make the news. Here is a partial list of headlines just from the past few weeks, each hyperlinked to its source:
May 14, 2025, AI Hallucinations Strike Again: Two More Cases Where Lawyers Face Judicial Wrath for Fake Citations
May 21, 2025, Judge Considers Sanctions Against Attorneys in Prison Case for Using AI in Court Filings
May 29, 2025, Lawyer Sanctioned $6,000 for AI-Generated Fake Legal Citations
May 30, 2025, Southern District of Florida Sanctions Lawyers for Submission of AI Hallucinated Caselaw
May 31, 2025, US Lawyer Sanctioned After Being Caught Using ChatGPT for Court Brief
This Should Not Be News to Most of Us
The problem of overworked lawyers attempting to take shortcuts is not new. Only the method has changed. For decades, lawyers have been getting sanctioned or called out by opposing counsel for:
- Using the headnote from a paid online legal research tool as a “quote” without reading the opinion to confirm it.
- Copying a pleading from a prior case and filing it without checking whether the law still applies.
- Lifting a motion from a CLE binder, online research tool, or lawyer listserv conversation and passing it off as their own.
- Using the analysis from someone else’s case within the firm, without knowing or understanding the facts, court, or procedural history of that case.
Every one of these examples has the same flaw: the lawyer wanted a way to circumvent the work we get paid to do, i.e., think.
The Real Problem Isn’t AI
AI isn’t the problem. It’s just the newest version of a long-standing temptation: to find a shortcut. Something to save time, make us look smart, or help us meet a deadline when the work hasn’t been done.
If you’re feeling pressure to use AI—or to do things faster, cheaper, or “more efficiently” than ever before—hear this:
You get paid to think, and no technology can replace your judgment or experience.
Your speed or formatting skills don’t determine your value. You are trained to analyze, reason, and argue. Your value lies in how you perceive what matters, identify what’s missing, and determine what it will take to achieve your client’s goals. You can’t delegate that to a machine, any more than you can outsource it to someone else’s pleading or form.
And don’t let fear push you to use a tool you don’t understand. Stop. Breathe. Learn what it can do. Learn what it can’t. Use it wisely—don’t rely on it to think for you, and don’t believe it when it assures you that it has.
For Judges and Supervisors: A Fix Worth Considering
To stop this problem from recurring, consider this simple fix:
Require every pleading filed with the court that contains a reference, cite, or quotation to any authority to be internally hyperlinked to an attached appendix that includes a copy of the source with the relevant rule, holding, or quote highlighted for the court’s convenience.
This should become standard, just like a certificate of service. Lawyers should also apply this requirement to the work of those they supervise. And no, clients should not pay for this “extra” work; it’s overhead–the price of doing business in the era of AI.
The Technology Changed, the Job Didn’t
This isn’t about shaming lawyers. It’s about reminding us who we are.
We are not prompt engineers or data processors. We are professionals who took an oath and have duties to our clients, the courts, and the public.
So please, don’t be a headline.
Read the case. Check the quotes. Confirm the law is still good. And don’t rely on any tool that doesn’t distinguish between the truth and a lie.
