Generative AI for Lawyers Part 3: Ethical Use of Hypotheticals with GAI

By Jeanne M. Huey

In the previous entry in this series, we discussed ABA Formal Opinion 512’s admonition against inputting any information relating to a client’s representation (confidential information under ABA Model Rule 1.6) into self-learning generative artificial intelligence (GAI) software without first obtaining the client’s “informed consent.”

Consider, however, that there are a variety of ways to use self-learning GAI without either disclosing any client confidential information or obtaining the client’s “informed consent.” These methods allow you to reap the benefits of self-learning GAI for your clients while maintaining your ethical obligations under the relevant rules and opinions.

Ethical Use of Hypotheticals in Case Discussions

ABA Model Rule 1.6 Comment 4 advises that using a hypothetical to “discuss issues relating to the representation is permissible so long as there is no reasonable likelihood that the listener will be able to ascertain the identity of the client or the situation involved.” In any public-facing discussion, this limitation is particularly relevant: audience members can connect the dots from contextual details in a hypothetical to identify the client, or simply look up the speaker’s cases online to get the same information. This makes it nearly impossible to say that there is “no reasonable likelihood” that client confidential information will be disclosed when lawyers use hypotheticals based on their own clients and cases in settings where the lawyer’s identity is known.

ABA Formal Opinion 480 (public comments by lawyers, such as blog posts and social media) and ABA Formal Opinion 511R (lawyers discussing cases in online forums such as listservs) take this principle further. Both opinions warn against using hypotheticals in any public context without the client’s informed consent when there is a possibility that third parties could deduce the identity or specific details of a client’s situation.

The combined guidance from these ethics opinions and related rules establishes a high bar for confidentiality in public legal commentary and makes it clear that clever wordsmithing and linguistic acrobatics alone (i.e., using hypotheticals) will not insulate lawyers from the risk of liability for breaching the confidentiality rules in these kinds of settings.

Advanced Strategies for Securely Using Hypotheticals with GAI

While any use of a hypothetical to discuss a client or case requires vigilance regarding client confidentiality, GAI may offer a more protective environment for the use of a well-written hypothetical.

In contrast to using hypotheticals in public discussions or commentary, using a hypothetical in a GAI prompt significantly reduces the likelihood that the scenario will be associated with a specific client: there is no identifiable author speaking to a human audience that could connect the dots. This allows attorneys to use hypotheticals to analyze complex legal issues effectively while safeguarding client confidentiality, and to do so without seeking informed consent from the client. This distinction aligns with ABA guidance on confidentiality, which emphasizes that the risk of disclosure varies depending on the audience and the platform used.

Consider the following approaches:

1. Abstract the Core Legal Issue

When crafting hypotheticals, distill the client’s situation to its fundamental legal principles, omitting any specifics that could reveal the client’s identity, such as names, dates, financial figures, or unique circumstances. Focus on the legal doctrines or statutory interpretations at play rather than granular details. (A short illustrative script showing how this scrubbing step might be automated appears after the list of strategies below.)

Example:

Instead of asking about a “global pharmaceutical company facing allegations of off-label marketing in violation of FDA regulations,” reframe the inquiry to address “a corporation navigating regulatory compliance challenges in the context of strict federal oversight.” This allows you to explore the complexities of regulatory compliance without disclosing identifiable information.

2. Emphasize Legal Theories and Precedents

Center your inquiries on broader legal theories, jurisprudence, or procedural rules. By focusing on legal frameworks and landmark cases, you can gain insights without tying the discussion to specific client facts.

Example:

Rather than delving into the specifics of a complex international arbitration your client is involved in, you might ask, “How have recent court decisions impacted the enforcement of arbitral awards under the New York Convention?” This approach examines critical legal interpretations without disclosing client involvement.

3. Frame Questions Around Legal Processes and Strategies

Pose questions that concentrate on legal processes, litigation strategies, or best practices rather than specific factual scenarios. This enables you to gather strategic insights while maintaining confidentiality.

Example:

If advising a client on mitigating risks in cross-border mergers, you could ask, “What are effective due diligence strategies for uncovering potential liabilities in international M&A transactions?” This allows exploration of procedural tactics without referencing the client’s situation.

4. Use Analogous Legal Contexts

Construct hypotheticals using different contexts or industries that parallel your client’s issues. By transferring the legal problem to a comparable but distinct setting, you can examine relevant principles while further obscuring client details.

Example:

If your client is an energy company facing environmental litigation over alleged contamination, you might frame the hypothetical around a manufacturing company dealing with toxic-tort claims. The legal principles regarding environmental liability and defense strategies remain pertinent, but the context shift protects client confidentiality.

By employing these techniques, you can effectively leverage GAI tools to explore complex legal issues while rigorously safeguarding client confidentiality.
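
For firms that build internal tooling around GAI, the first strategy above (abstracting identifying specifics) can even be partially automated before a draft ever reaches the tool. The short Python sketch below is purely illustrative: the function name, placeholder tokens, and patterns are hypothetical choices of mine, it assumes the lawyer supplies a list of client-identifying terms, and it is no substitute for attorney review of the final prompt.

    # Illustrative sketch only: strip client-identifying details from a
    # draft hypothetical before it is pasted into a GAI tool. Names and
    # patterns here are hypothetical examples, not a vetted product.
    import re

    def scrub_hypothetical(draft: str, client_terms: list[str]) -> str:
        """Replace client-specific details with generic placeholders."""
        text = draft
        # Redact terms the lawyer has flagged as identifying (party names,
        # product names, matter numbers, and so on).
        for term in client_terms:
            text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
        # Generalize dollar figures, e.g. "$4.2 million" becomes "[AMOUNT]".
        text = re.sub(r"\$[\d,.]+(?:\s*(?:thousand|million|billion))?", "[AMOUNT]", text)
        # Generalize dates such as "March 3, 2024" or "3/3/2024" to "[DATE]".
        text = re.sub(
            r"\b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|"
            r"Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|"
            r"Nov(?:ember)?|Dec(?:ember)?)\s+\d{1,2},\s+\d{4}\b"
            r"|\b\d{1,2}/\d{1,2}/\d{2,4}\b",
            "[DATE]", text)
        return text

    # Example: a client-specific draft becomes a generic hypothetical.
    draft = ("Acme Pharma received an FDA warning letter on March 3, 2024 "
             "over $4.2 million in off-label sales.")
    print(scrub_hypothetical(draft, ["Acme Pharma"]))
    # Prints: [REDACTED] received an FDA warning letter on [DATE]
    # over [AMOUNT] in off-label sales.

Even with such a filter, the lawyer must still read the scrubbed prompt before submitting it; no pattern list can catch every detail that might identify a client, which is why the manual abstraction techniques above remain the primary safeguard.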

© 2024 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

Generative AI for Lawyers Part 2: Maintaining Confidentiality

By Jeanne M. Huey

ABA Formal Op. 512 focuses on the risks of using generative AI (GAI) in legal practice, with a key concern being the confidentiality of client information. Under ABA Model Rule 1.6, lawyers are obligated to protect all client-related information, including preventing inadvertent or unauthorized access. ABA Model Rule 1.9(c) extends this duty to former clients, and ABA Model Rule 1.18(b) to prospective clients.

Unauthorized Disclosure of Confidential Information: What Is the Risk with GAI?

Self-learning GAI poses a higher risk to client confidentiality than other technology used in a modern law practice because it can retain and reuse input data (prompts), increasing the chance of inadvertent disclosure or cross-use in other cases. This is true whether the information is used within a firm’s closed system—where the stored data is only used internally—or outside the firm in an open system—where data is shared with external sources.

Why do lawyers need to be concerned about inputting confidential information into an internal firm or “closed” GAI system? The answer lies in the distinction between access to confidential information and the use of that information within a firm. While lawyers and staff typically have access to all of the firm’s clients’ confidential information, using that information to prompt the firm’s self-learning GAI system creates a real risk that one client’s information may be applied to other clients’ cases. This may breach the confidentiality obligations owed to the first client and could occur without any of the lawyers involved realizing that a violation has taken place.

This risk is not just hypothetical. Multiple ethics opinions, including Opinion 512 and those issued by the Florida Bar and the Pennsylvania and Philadelphia Bars, emphasize that self-learning GAI tools may inadvertently cause the disclosure of client information even in a closed system used exclusively within a single law firm.

Informed Consent—A Prerequisite for Using Confidential Information with GAI

So, what can be done to avoid a violation of Model Rule 1.6 for unauthorized disclosure of confidential client information under these circumstances? Opinion 512 concludes that, due to the unique risks posed by self-learning GAI, lawyers should obtain “informed consent” from the client before using any information related to the representation in GAI prompts—even within a firm’s “closed system.”

The opinion is quick to note that informed consent cannot be accomplished by a boilerplate acknowledgment or notice clause in an engagement letter. Informed consent is a defined term in ABA Model Rule 1.0(e) and requires that the lawyer provide the client with “adequate information and explanation about the material risks of and reasonably available alternatives to” the proposed conduct.

Opinion 512 explains that “adequate information and explanation” under these conditions calls for a “meaningful dialogue” with the client that includes:

  • the lawyer’s best judgment about why the GAI tool is being used;
  • the extent of and specific information about the risks involved in disclosing client information;
  • particulars about the kinds of client information that will be disclosed;
  • the ways in which others might use the information against the client’s interests;
  • a clear explanation of the GAI tool’s benefits to the representation; and
  • the risk that later users or beneficiaries of the GAI tool will have access to information relating to the representation.

This list, from Opinion 512, makes it clear that any lawyer seeking informed consent must have more than a general awareness of GAI technology. They must, as ABA Model Rule 1.1 Comment 8 sets out, be competent in understanding the benefits and risks of that technology.

Obtaining informed consent here aligns with a lawyer’s duty to communicate effectively with their client about the work being undertaken in their case. Under ABA Model Rule 1.4, lawyers must inform clients about decisions affecting their representation, including the means proposed to achieve the client’s objectives. When using client confidential information in a self-learning GAI system is proposed, the client must be given enough information to make an informed decision about whether to permit it.

Finally, while “informed consent” does not require written consent, the best practice is to confirm the client’s consent either (1) in a writing by the client or (2) in a writing from the lawyer confirming the client’s oral consent. (See ABA Model Rule 1.0(b).) This approach helps protect the client’s interests and discharge the lawyer’s ethical duties, ensuring that trust and transparency remain intact throughout the representation.

In Part 3 of this series, we will explore how you can use self-learning GAI tools to benefit your client without disclosing information about the representation (confidential information) or obtaining informed consent.

© 2024 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

Generative AI for Lawyers Part 1: Competence, Professionalism, and Risks

By Jeanne M. Huey


In today’s fast-paced legal world, lawyers face a critical challenge: how to maintain competence amid rapidly evolving technology, specifically generative AI (GAI) tools. The American Bar Association’s Formal Ethics Opinion 512 sheds light on this important issue, guiding us on the path to staying ethical while navigating new technologies that help us better serve our clients.

Competence for lawyers in 2024 involves more than just knowledge of the law. Under ABA Model Rule 1.1, lawyers must provide competent representation, which includes having the legal knowledge, skill, thoroughness, and preparation reasonably necessary for their work. Under Comment 8 to Rule 1.1, this also encompasses an understanding of the technology that lawyers use in their practice. Importantly, competence in the technology that lawyers use doesn’t require every lawyer to become a tech expert or AI specialist. As Opinion 512 reminds us, however, it is not enough simply to hire someone else who does know about the risks and benefits of that technology. Lawyers must have a “reasonable understanding” of the capabilities and limitations of the specific technology that they use, and that includes GAI.

Lawyers can meet this standard either by acquiring a reasonable understanding of the benefits and risks of GAI tools on their own, or by drawing on the expertise of others who can provide guidance about the relevant technology’s capabilities and limitations. Put another way, remaining ignorant about the technology used in their law practice, such as GAI, is not an option. And, of course, this isn’t a one-time task: technology, particularly GAI, is evolving rapidly, and staying competent means keeping pace with these advancements.

Understanding the Risks of Generative AI—What Does GAI Have to Say about It?

When asked about the number one risk to lawyers who use GAI for legal work, the GAI program I use (ChatGPT) told me:

The number one risk associated with lawyers using ChatGPT is providing inaccurate or misleading legal advice, often due to the AI’s limitations in understanding legal context and nuances.

[Emphasis added.] We have all heard about lawyers who blindly took flawed or completely unfounded legal analysis or “pretend” caselaw generated by GAI, plugged it into a pleading, and filed it with the court. One would think by now that all lawyers would understand that GAI is not a substitute for actual legal work and analysis—the work that lawyers are trained in and get paid to provide to their clients. Nonetheless, at least a few attorneys have recently been referred to their local disciplinary authority for citing a nonexistent case generated by ChatGPT in a legal brief, which the court found violated Federal Rule of Civil Procedure 11 and amounted to the submission of false statements to the court. It also likely violated the attorneys’ duty of candor to the tribunal under ABA Model Rule 3.3 or its equivalent.

The GAI program then went on to remind me:

AI tools can generate convincing-sounding responses that might seem factually or legally correct but can include factual inaccuracies, outdated information, or what’s known as “AI hallucinations”—where the AI confidently produces false or fabricated information.

This statement is true—and it is part of the allure of using GAI. The “convincing-sounding” responses are tempting to simply cut and paste. Consider, however, that soon—if it has not already happened—everyone in the legal system (i.e., judges, lawyers, paraprofessionals, clients, and professors) will be able to recognize the difference between GAI-generated text and actual legal analysis and argument written by a skilled, insightful, and intelligent lawyer. When that happens, anyone who chooses to continue using lightly edited GAI text, even in everyday correspondence, will lose credibility with their colleagues and the courts in which they practice.

Competence in the AI Age

So, what does competence in this AI-driven era require? First and foremost, it involves independent verification of AI output. Simply copy-pasting what the tool produces and sending it off to a client or court is not enough. And trying to mask your AI-generated text by running it through a program that “humanizes” it is not a solution. Try it once and you will see why. There are no shortcuts when it comes to the actual practice of law. Lawyers must apply their legal knowledge and judgment to review the AI’s work and modify it accordingly.

The degree of verification needed will depend on the task. But never forget that clients usually face complex, emotional, and high-stakes situations, and they rely on lawyers for more than just legal knowledge—they need understanding, legal guidance, strategic thinking, and human empathy. And they deserve (and lawyers get paid for) far more than just the output of a machine.

No amount of technology can substitute for the tailored advice that comes from years of training, real-world experience, and the trust built through personal relationships. As technology reshapes how lawyers work, understanding how to navigate these changes ethically and responsibly is vital for keeping clients happy, maintaining competence, and complying with our rules of professional conduct.

© 2024 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.