No Shortcuts: Why Fundamentals Still Win in Law

My latest blog takes a hard look at recent cases where lawyers skipped the basics, and why “no shortcuts” isn’t just a football mantra — it’s a professional requirement.

By Jeanne M. Huey

The Basics Are the Story

I could have headlined this “Is This Still Happening?” But we’ve been asking that for over two years, and the answer is still “yes.” By “this,” I mean lawyers using AI for legal research — and not bothering to verify the results. And not at the margins — we’re talking about big international firms that have every research tool, policy, training program, and resource that money can buy.

Nobody expects us to be perfect all of the time. We’ve all fumbled — accidentally misquoted a case, misspelled a cite, or leaned on a holding that later got picked apart. Those are part of the game, and when they are caught and fixed, there is no penalty. But using AI-generated citations without checking them? That’s not a misstep — that’s running a hook and ladder on the very first play.

When Fundamentals Get Skipped

Last month, in Johnson v. Dunn, three Butler Snow LLP lawyers were sanctioned after one of them admitted to using unverified AI-generated citations in court filings. Until early 2024, this lawyer’s AI use had been purely personal: finding vacation spots, looking up fitness info, even researching colleges for his kids.

He told the Judge that he knew the firm’s AI policy but did not comply with it. How a tool he had only used casually for travel tips became a substitute for the firm’s paid legal research services — which cost thousands each month for a reason — is hard to understand. As a partner, he could have asked any paralegal, associate, or staff person to pull the cases and highlight the relevant passages for his review; reviewing them would have taken minutes. The only charitable explanation is that he thought AI was a legitimate legal research platform. If that’s true, this case shows that when it comes to the use of technology in the practice of law, even strong institutional safeguards mean little without constant education and personal accountability.

The Brief Bank

At the show-cause hearing in the case, another Butler Snow lawyer explained why he hadn’t checked the citations added by his colleague. His reasoning: Many of their cases involve the same law and precedent, so his team often pulls citations from older briefs and plugs them into briefs on the same topics. The suggestion was that this eliminated the need to rely on AI and was safer because they had used those cases before.

The Judge felt differently, writing that this practice was one of three factors that “deepened rather than allayed” her concerns.

The problem with this practice? Past use isn’t proof of present accuracy. Laws change. Precedent gets overturned. Context matters.

This same concern applies to lawyers who borrow motions or briefs from other lawyers — or who post to a listserv asking for forms or legal advice from other lawyers that they plan to file as their own. Whether the information comes from your firm’s file cabinet, another lawyer’s Dropbox, or an AI chatbot, if you haven’t verified it for the case at hand, you’re taking a shortcut you can’t defend.

Cite & Highlight: A Run Up the Middle

Here’s an easy solution: require every case cited to be attached in an appendix with the relevant quote or holding highlighted. I call it “Cite & Highlight.” Courts could adopt it tomorrow. Firms could implement it today. The time it takes is nothing compared to the reputational cost of getting burned by a bad citation.
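For firms that want a first-pass safety net, part of that check can even be automated. The sketch below is a hypothetical Python illustration, not a product or a prescribed workflow: it assumes the brief has been saved as plain text and that each cited case has a matching file in an appendix folder, and it only flags gaps. A lawyer still has to read and highlight every case.

    import re
    import sys
    from pathlib import Path

    # Hypothetical layout: the brief saved as plain text, plus one
    # appendix file per cited case, named after the case
    # (e.g., "Johnson v. Dunn.pdf").
    BRIEF = Path("brief.txt")
    APPENDIX_DIR = Path("appendix")

    # A deliberately crude pattern for "Name v. Name" case names.
    # Real citation formats are messier, so a clean report is a
    # starting point for human review, not proof of compliance.
    CASE_NAME = re.compile(r"\b[A-Z][A-Za-z.'& -]+? v\. [A-Z][A-Za-z.'&-]+")

    def missing_appendix_entries(brief: Path, appendix: Path) -> list[str]:
        # Collect every cited case name, then check that a file with
        # the same name exists in the appendix folder.
        cited = sorted(set(CASE_NAME.findall(brief.read_text(encoding="utf-8"))))
        attached = {p.stem for p in appendix.iterdir()}
        return [case for case in cited if case not in attached]

    if __name__ == "__main__":
        missing = missing_appendix_entries(BRIEF, APPENDIX_DIR)
        if missing:
            print("No appendix copy found for:")
            for case in missing:
                print("  -", case)
            sys.exit(1)
        print("Every cited case has an appendix copy. Now read and highlight each one.")

The point of the nonzero exit code is that a filing workflow can refuse to proceed until the appendix is complete; the reading and highlighting remain human work.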

In Johnson v. Dunn, the partner who dropped AI-generated citations into a filing without review failed to make the reasonable inquiry Rule 11 requires. In Lacey v. State Farm (C.D. Cal. May 5, 2025), several large national firms made the same mistake by using AI, this time with 27 citations: nine wrong and at least two nonexistent. Whether the bad law comes from AI, an old brief, or a motion written by another lawyer, the problem is the same: if your name is going on it and you haven’t verified it, you’re skipping the work. And in law, like football, there’s no substitute for work.

Flash and Fundamentals

Oregon football has a reputation for flash — the best uniforms, the newest tech, the gleaming facilities (thanks, Uncle Phil). But that’s not why they win.

Coach Lanning tells his players to do the work every single day and ask themselves: “How can I improve?” To own mistakes. To learn from people who do it better. To have the patience to get it right, because fast and wrong is still wrong. That lesson fits the courtroom as well as it does the gridiron.

The ethics rules don’t demand excellence; they set the baseline. But clients don’t hire us for the baseline. They pay us to prepare, check, and deliver our best. Yes, it is hard, but if it were easy, anyone could do it. The practice of law requires intense work done meticulously every time.

Senior lawyers: model that standard. Young lawyers: don’t let pressure for speed push you into skipping steps. Stop feeding the myth that good work can be done in a flash. Shortcuts might buy you a little time today, but they undermine your work product, your discipline, and your success in the long run.

Law is not a job — it’s a profession. Every filing, every argument, every case reflects our expertise and pride in our work. Wins — in football or in law — are constructed day by day, decision by decision. And there are no shortcuts.

Go Ducks.

Oops, I Did It Again: Lawyers Rely on AI at Your Peril

Lawyers continue to be misled by AI-generated case law that appears genuine but isn’t. This post is about a personal experience and why lawyers can’t afford to stop thinking.

I Gave GAI Clear Instructions: It Still Lied

A few weeks ago, for fun, I asked the GAI program I use to look on the internet and see if there was a quote on a specified topic from a “Founding Father.” Within seconds, it provided me with an on-point quote attributed to John Adams, accompanied by a hyperlinked citation. It was the best party trick ever, until it wasn’t. Because the quote didn’t exist. Anywhere. When I called it out, GAI replied: “It sounds like something John Adams would say.”

Yesterday, I tested it again.

I asked it to find the rule for a certain proposition: a rule of civil procedure that I knew existed. It told me the rule didn’t exist. I wanted to see if it would correct itself, so I asked it to back that up with a case and a link to the rule. It did, with confidence. It even provided a quote from the case that it said supported the position it had taken. Except it was still wrong: the rule did exist, and it had simply made up the quote.

When I pointed out the error and asked how this had happened, GAI explained:

I incorrectly generalized and answered based on a commonly followed general rule.

Mind you, I had given it specific, detailed instructions and prompts—things I had learned from CLE courses and articles about how to use AI and get accurate outputs. These included telling it not to make anything up, to double-check sources, and to provide links to public, official sources for every “fact” it retrieved from the internet.

What I got was a lie, wrapped in a polished, confident tone, dressed up like a real legal citation—because GAI is built to give me what I want and to sound persuasive and helpful, even when it’s dead wrong.
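One concrete habit follows from that experience: before trusting any AI-supplied link, confirm the link even resolves, and then read what is actually on the page. A minimal Python sketch (the URLs below are placeholders, not real sources) might look like this:

    import urllib.request
    import urllib.error

    # Placeholder links of the kind a GAI tool returns alongside its
    # "quotes." Swap in the links from the tool's actual answer.
    links = [
        "https://www.example.com/rules/rule-11",
        "https://www.example.org/cases/some-cited-case",
    ]

    for url in links:
        try:
            # Fetch the page; print the HTTP status if it loads.
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(resp.status, url)
        except (urllib.error.URLError, ValueError) as exc:
            # A dead or malformed link is an immediate red flag.
            print("DEAD", url, "-", exc)

A link that loads proves only that a page exists; whether the quoted rule or holding actually appears there is still something only a reader can confirm.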

Lawyers’ Misuse of AI Continues to Make Headlines

Different courts, different lawyers, but the failure is identical: if you don’t read the case, the court will, and then you’ll make the news. Here is a partial list of headlines just from the past few weeks, each hyperlinked to its source:

May 14, 2025, AI Hallucinations Strike Again: Two More Cases Where Lawyers Face Judicial Wrath for Fake Citations

May 21, 2025, Judge Considers Sanctions Against Attorneys in Prison Case for Using AI in Court Filings

May 29, 2025, Lawyer Sanctioned $6,000 for AI-Generated Fake Legal Citations

May 30, 2025, Southern District of Florida Sanctions Lawyers for Submission of AI Hallucinated Caselaw

May 31, 2025, US Lawyer Sanctioned After Being Caught Using ChatGPT for Court Brief

This Should Not Be News to Most of Us

The problem of overworked lawyers attempting to take shortcuts is not new. Only the method has changed. For decades, lawyers have been getting sanctioned or called out by opposing counsel for:

  • Using the headnote from a paid online legal research tool as a “quote” without reading the opinion to confirm it.
  • Copying a pleading from a prior case and filing it without checking if the law still applies.
  • Lifting a motion from a CLE binder, online research tool, or lawyer listserv conversation and passing it off as their own.
  • Using the analysis from someone else’s case within the firm, without knowing or understanding the facts, court, or procedural history of that case. 

Every one of these examples has the same flaw: the lawyer wanted a way to circumvent doing the work we get paid to do, i.e., to think.

The Real Problem Isn’t AI

AI isn’t the problem. It’s just the newest version of a long-standing temptation: to find a shortcut. Something to save time, make us look smart, or help us meet a deadline when the work hasn’t been done.

If you’re feeling pressure to use AI—or to do things faster, cheaper, or “more efficiently” than ever before—hear this:

You get paid to think, and no technology can replace your judgment or experience.

Your speed or formatting skills don’t determine your value. You are trained to analyze, reason, and argue. Your value lies in how you perceive what matters, identify what’s missing, and determine what it will take to achieve your client’s goals. You can’t delegate that to a machine, just as you can’t outsource it to someone else’s pleading or form.

And don’t let fear push you to use a tool you don’t understand. Stop. Breathe. Learn what it can do. Learn what it can’t. Use it wisely—don’t rely on it to think for you, and don’t believe it when it assures you that it has. 

For Judges and Supervisors: A Fix Worth Considering

To stop this problem from recurring, consider this simple fix:

Require every pleading filed with the court that contains a reference, cite, or quotation to any authority to be internally hyperlinked to an attached appendix that includes a copy of the source with the relevant rule, holding, or quote highlighted for the court’s convenience.

This should become standard, just like a certificate of service. Lawyers should also apply this requirement to the work of those they supervise. And no, clients should not pay for this “extra” work; it is overhead, the price of doing business in the era of AI.

The Technology Changed, the Job Didn’t

This isn’t about shaming lawyers. It’s about reminding us who we are.

We are not prompt engineers or data processors. We are professionals who took an oath and have duties to our clients, the courts, and the public.

So please, don’t be a headline. 

Read the case. Check the quotes. Confirm the law is still good. And don’t rely on any tool that doesn’t distinguish between the truth and a lie.

Generative AI for Lawyers Part 1: Competence, Professionalism, and Risks

By Jeanne M. Huey

 © 2024 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association. 

In today’s fast-paced legal world, lawyers face a critical challenge: how to maintain competence amidst rapidly evolving technology, specifically generative AI (GAI) tools. The American Bar Association’s Formal Ethics Opinion 512 sheds light on this important issue, guiding us on the path to staying ethical while navigating new technologies that help us to better serve our clients.

Competence for lawyers in 2024 involves more than just knowledge of the law. Under ABA Model Rule 1.1, lawyers must provide competent representation, which includes having the legal knowledge, skill, thoroughness, and preparation reasonably necessary for their work. Under Comment 8 to Rule 1.1, this also encompasses an understanding of the technology that lawyers use in their practice.

Importantly, competence in the tech that lawyers use doesn’t require every lawyer to become a tech expert or AI specialist. As Opinion 512 reminds us, however, it is not enough to simply hire someone else who does know about the risks and benefits of that technology. Lawyers must have a “reasonable understanding” of the capabilities and limitations of the specific technology that they use. That includes GAI. Lawyers can meet this standard either by acquiring a reasonable understanding of the benefits and risks of the GAI tools on their own, or they can draw on the expertise of others who can provide them with guidance about the relevant technology’s capabilities and limitations. Put another way, remaining ignorant about the technology used in their law practice, such as GAI, is not an option. And, of course, this isn’t a one-time task, because technology, particularly GAI, is evolving rapidly, and staying competent means keeping pace with these advancements.

Understanding the Risks of Generative AI: What Does GAI Have to Say about It?

When asked about the number one risk to lawyers who use GAI for legal work, the GAI program I use (ChatGPT) told me:

The number one risk associated with lawyers using ChatGPT is providing inaccurate or misleading legal advice, often due to the AI’s limitations in understanding legal context and nuances.

[Emphasis added.] We have all heard about lawyers who blindly took flawed or completely unfounded legal analysis or “pretend” caselaw generated by GAI, plugged it into a pleading, and filed it with the court. One would think by now that all lawyers would understand that GAI is not a substitute for actual legal work and analysis: the work that lawyers are trained in and get paid to provide to their clients. Nonetheless, at least a few attorneys have recently been referred to their local disciplinary authority for citing a nonexistent case generated by ChatGPT in a legal brief, which the court found violated Federal Rule of Civil Procedure 11 and amounted to the submission of a false statement to the court. It also likely violated the attorneys’ duty of candor to the tribunal under ABA Model Rule 3.3 or its equivalent.

The GAI program then went on to remind me:

AI tools can generate convincing-sounding responses that might seem factually or legally correct but can include factual inaccuracies, outdated information, or what’s known as “AI hallucinations”—where the AI confidently produces false or fabricated information.

This statement is true, and it is part of the allure of using GAI. The “convincing-sounding” responses are so tempting to simply cut and paste. Consider, however, that soon (if it has not already happened) everyone in the legal system (i.e., judges, lawyers, paraprofessionals, clients, and professors) will be able to recognize the difference between GAI-generated text and actual legal analysis and argument written by a skilled, smart, insightful, and intelligent lawyer. When that happens, anyone who chooses to continue to use lightly edited GAI text, even in everyday correspondence, will lose credibility with their colleagues and the courts in which they practice.

Competence in the AI Age

So, what does competence in this AI-driven era require? First and foremost, it involves independent verification of AI output. Simply copy-pasting what the tool produces and sending it off to a client or court is not enough. And trying to mask your AI-generated text by running it through a program that “humanizes” it is not a solution. Try it once and you will see why. There are no shortcuts when it comes to the actual practice of law. Lawyers must apply their legal knowledge and judgment to review the AI’s work and modify it accordingly.

The degree of verification needed will depend on the task. But never forget that clients usually face complex, emotional, and high-stakes situations, and they rely on lawyers for more than just legal knowledge—they need understanding, legal guidance, strategic thinking, and human empathy. And they deserve (and lawyers get paid for) far more than just the output of a machine.

No amount of technology can substitute for the tailored advice that comes from years of training, real-world experience, and the trust built through personal relationships. As technology reshapes how lawyers work, understanding how to navigate these changes ethically and responsibly is vital for keeping clients happy, maintaining competence, and complying with our rules of professional conduct.