The Sorcerer's Apprentice: How the Magic of Generative AI Raises Legal Risks
This article was written for The National Association of Legal Assistants and was published in the January 2024 issue of Facts & Findings.
We are all looking for a little bit of magic in our practices. Some people believe artificial intelligence is as close to magic as humans have come.
Most paralegals and lawyers already use AI at work. Casetext, Westlaw Edge, and CoCounsel all make use of AI to support legal offices. Many e-discovery platforms use AI to help with document production and review. There is no doubt that AI has a future in the legal industry. The question is whether the future is here.
Over the last few years, ChatGPT, Midjourney, and Dall-E have brought generative AI to the public consciousness. Before using these new generative AI products, it is important to understand the products and their risks. As with any new technology, it should not be deployed in a law practice without significant study and firm or managing partner approval.
WHAT IS AI?
Artificial intelligence is a bit of a misnomer. To date, none of the publicly available products actually think. Instead, "AI" is a shorthand way to describe several computational processes.
MACHINE LEARNING
The most common form of AI in legal practice is machine learning, a process by which computer systems improve their performance on specific tasks over time. Supervised machine learning, which learns patterns from datasets with known, labeled outcomes, is common in the e-discovery world. Many e-discovery platforms allow a sample of documents to be reviewed by hand. Based on that reviewed sample, machine learning allows the platform to determine which unreviewed documents are likely to be responsive to discovery requests.
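The supervised workflow described above can be shown at toy scale. The sketch below is an illustration only, not any vendor's actual algorithm: it "learns" word frequencies from a hypothetical hand-reviewed sample, then scores unreviewed documents for likely responsiveness.

```python
from collections import Counter

def train(sample):
    """Learn word frequencies from a hand-reviewed sample.
    `sample` is a list of (document_text, is_responsive) pairs."""
    responsive, nonresponsive = Counter(), Counter()
    for text, is_responsive in sample:
        (responsive if is_responsive else nonresponsive).update(text.lower().split())
    return responsive, nonresponsive

def predict(model, text):
    """Flag an unreviewed document as likely responsive if its words
    overlap more with the responsive sample than the non-responsive one."""
    responsive, nonresponsive = model
    words = text.lower().split()
    return sum(responsive[w] for w in words) > sum(nonresponsive[w] for w in words)

# Hypothetical reviewed sample (illustrative only).
sample = [
    ("merger pricing discussed with the board", True),
    ("draft merger agreement attached for review", True),
    ("lunch order for the office party", False),
    ("parking validation and office party photos", False),
]
model = train(sample)
print(predict(model, "please review the merger pricing"))   # → True
print(predict(model, "more office party photos attached"))  # → False
```

Real platforms use far more sophisticated models, but the shape is the same: human reviewers label a sample, and the system generalizes those labels to the rest of the collection.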
The next step in machine learning is self-supervised machine learning, which learns patterns without human-provided labels by generating its own training signal from the raw data.
Both supervised and self-supervised machine learning can hone their performance through reinforcement, which rewards the system for producing correct outputs.
GENERATIVE ARTIFICIAL INTELLIGENCE
Generative AI creates new outputs based on the data it has been trained on. These outputs can take the form of images, text, and more. One common architecture for creating such content is the generative adversarial network, which pairs two neural networks: a generator, which creates the content, and a discriminator, which evaluates it.
The most prevalent form of generative AI is the large language model. Large language models are neural networks with billions of parameters, trained on large quantities of unlabeled text using self-supervised or semi-supervised learning. They use statistics drawn from vast swaths of text to mimic an understanding of the connections between words and phrases.
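The statistical idea behind large language models can be illustrated at miniature scale. The sketch below is vastly simpler than any real LLM: it counts which word follows which in a tiny corpus, then "generates" text by always picking the statistically most common next word.

```python
from collections import Counter, defaultdict

corpus = "the court granted the motion and the court granted the appeal".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length):
    """Emit `length` words, always choosing the most common successor --
    a crude stand-in for an LLM's next-word prediction."""
    words = [start]
    for _ in range(length - 1):
        successors = following.get(words[-1])
        if not successors:
            break
        words.append(successors.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # → "the court granted the"
```

A real model conditions on far more context and billions of learned parameters, but the core mechanism is the same: predict the next word from statistical patterns in the training text. Nothing in that mechanism checks whether the output is true, which is why hallucinations occur.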
Generative AI is not limited to large language models. Models can also be trained on images to generate pictures and video, or on audio to generate voices and soundalikes.
The most well-known generative AI product is likely ChatGPT, which stands for Chat Generative Pre-trained Transformer. Originally, ChatGPT was purely a large language model; the newer GPT-4 model is multimodal, handling images as well as text. Ask ChatGPT to write a motion to compel, and it will draft one for you.
RISKS OF GENERATIVE AI
If ChatGPT will draft a motion to compel, why not log on today? Well, there are a few issues to consider first.
INTELLECTUAL PROPERTY RIGHTS
Generative AI is trained on large datasets, and there are open questions about whether its outputs infringe the copyrights of the works on which it was trained. In September 2023, two class-action lawsuits were filed against Meta and OpenAI alleging, among other things, copyright infringement not only for copying the works but also for creating outputs derivative of the copyrighted works. Until those questions are resolved, using outputs from generative AI risks liability for copyright infringement.
HALLUCINATIONS
When I prepare witnesses for depositions, I always warn them not to answer questions if they do not know the answer. Generative AI does not say, “I do not know.” Generative AI offers confident responses that are not always correct. This includes making up authorities. OpenAI’s terms of service explicitly state, “[g]iven the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.”
Two lawyers in New York were sanctioned for citing cases that ChatGPT made up. SCOTUSblog tested ChatGPT's ability to answer questions about Supreme Court cases and found that the product answered fewer than half the questions accurately. These hallucinations mean that lawyers and paralegals cannot yet rely on generative AI's outputs. Because of this, some courts have issued local rules requiring disclosure when ChatGPT or other generative AI products are used.
PRIVACY AND CONFIDENTIALITY VIOLATIONS
ABA Model Rule 1.6 states, with limited exceptions, "a lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent." OpenAI’s terms of use state that "OpenAI may use Content to provide and maintain the Services, comply with applicable law, and enforce our policies. You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms."
This raises serious concerns that putting personally identifiable information into a generative AI input discloses that information to a third party. Without consent, such a disclosure may be found to violate the rules of professional conduct. It could also result in a waiver of attorney-client privilege. Until those issues are resolved, using generative AI creates risks for your clients and for the professional licenses of you and your supervising attorney.
WHAT IS NEXT?
There are serious concerns about using generative AI in a legal practice today. However, lawyers and paralegals cannot ignore technological advances. The American Bar Association unanimously adopted Resolution 604, outlining the appropriate steps for developing AI. That resolution calls on developers to:
- Ensure their products are subject to human authority, oversight and control.
- Be accountable for the consequences caused by their products, including taking affirmative steps to mitigate against harm or injury.
- Ensure transparency and traceability by documenting key decisions, procedures and outcomes.
That framework sets the stage for generative AI to transform industry and the economy in a positive manner. In time, AI will transform the practice of law, and individuals who understand it will be in great demand in the legal marketplace. However, until all the risks of using generative AI are understood, lawyers and paralegals must be thoughtful and careful in using these new tools.
Mickey Mouse took the sorcerer’s hat and gained great power. However, as Mickey learned, with great power comes the opportunity for disaster. Adopting the power of generative AI requires collaboration between you, your firm, your managing partner, and experts.