An Assessment of the Artificial Intelligence Landscape for Lawyers
This article was originally published by the Baton Rouge Bar Association in Volume 6, Issue 6 of The Baton Rouge Lawyer.
“Any AI system that can be used for good can also be used for evil. The more complex an AI system is, the more likely it is to fail. AI systems will always do the unexpected, especially when you least expect it. AI systems will always find a way to break the rules. AI systems will always learn from their mistakes, but they will not always learn the right lessons.”
Bard, a ChatBot[1]
Our quest in this article is to review some key concepts foundational to an awareness of artificial intelligence (A.I.) as it relates to the legal field, to consider some of the potential challenges recent developments in A.I. pose to lawyers and their clients, and to offer suggestions for spotting and addressing the legal and ethical issues those challenges raise. We hope you find it a useful, if quick, overview of the technology, the impact it will have on all of us and our clients, and ways to equip ourselves for the foreseeable challenges this rapidly evolving technology introduces.
I. A Perspective on How We Got Here with A.I.
A.I. as a concept has been with us for decades. It dates back at least to 1956, when a small group of academic and industry scientists gathered at Dartmouth College for a few weeks to brainstorm on a new concept coined “artificial intelligence.”[1] Since then, other important technologies have developed, sometimes concurrently, allowing A.I. to become less of a vision and more of a reality. For example, in 1956 the Internet and the global, electronic data collection it represents did not exist.[2] At the same time, the first transistors had just been invented (and shortly thereafter, integrated circuits), and the microprocessor foundational to what we all now rely upon was not invented until around 1967-1971.[3] Since their invention, modern computer processors have become more and more powerful, while digital storage capacities have grown exponentially.[4] Neural network software architectures, language processing models and the mathematics of statistics and probabilities fundamental to such software have continued to evolve. Through the work of generations of software programmers taking advantage of these developments over decades, software systems leveraging ever more powerful computer processors and automated, machine-learning analyses of massive, publicly and freely (if not legally) available data stores have evolved over the past 10-15 years to a point likely unimaginable in 1956.
Moving forward to 2023, A.I. has become a topic of extreme interest in the private and public sectors, thanks largely to publicity surrounding new generative A.I. product launches, including ChatGPT from a company named OpenAI, which initially sought to provide users with a “free,” first-hand experience of the capabilities (and incidental fallibilities) these new A.I. systems now possess. Recent, converging developments in so-called large language models, accompanying data analytics, and generative A.I. have triggered the latest uptick in A.I.-related media coverage. Companies, governments, and entrepreneurs are now scrambling to determine how to leverage, and how to cope with, the possibilities and threats presented by today’s metamorphosing A.I. systems.
And the future of A.I. is likely to be even more challenging and intriguing. Every lawyer and client should brace themselves for the time when A.I. reaches a singularity and exhibits sentience: that will be the time when “Asimov’s Laws” meet “Moore’s Law” and “Murphy’s Law.”[1]
In their current embodiments, most A.I. systems still rely upon three main components:
- Data: one or more data sources for analysis and training
- The A.I. Model: one or more software logic/programs/algorithms that, when executed by computers, process and train with the data and generate output when prompted by a user
- User Interface: a display or other visual or aural interactive device, programmed with software that receives prompts or queries from a user and displays or transmits answers or output back to that user
In the case of OpenAI’s ChatGPT, as well as certain other A.I. systems from Google, Microsoft and others, the user interface is a “chatbot.” Generally speaking, a chatbot is a software component or module that generates a text box for receiving questions or prompts from the user and for visually displaying responsive output back to the user. Chatbots for A.I. systems are being employed to interface with millions of users, leveraging language and other data processing capabilities to decipher text prompts from a user and to simulate human communication by delivering prompt-responsive outputs using human-like expressions, usually in text form.
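To make these components concrete, below is a deliberately oversimplified Python sketch, entirely our own and hypothetical, of how the three pieces fit together. The toy “model” merely looks up canned answers; a real A.I. model is a trained statistical system, not a lookup table, so treat this only as a map of the moving parts:

```python
# Illustrative only: the three components of an A.I. system in miniature.
# Real systems replace this toy lookup "model" with a trained neural network.

TRAINING_DATA = {  # 1. Data: a tiny stand-in for a training data store
    "force majeure": "A clause excusing performance upon extraordinary events.",
    "estoppel": "A bar against contradicting one's own prior position.",
}

def model(prompt: str) -> str:
    """2. Model: a hypothetical stand-in for a trained A.I. model."""
    for term, answer in TRAINING_DATA.items():
        if term in prompt.lower():
            return answer
    return "My training data does not cover that."

def chatbot() -> None:
    """3. User interface: a text prompt/response loop, i.e., a chatbot."""
    while True:
        prompt = input("You: ")
        if prompt.lower() in {"quit", "exit"}:
            break
        print("Bot:", model(prompt))

if __name__ == "__main__":
    chatbot()
```

The essential point is the flow: data trains the model, and the interface carries prompts in and generated output back out.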
The data used to “train” the A.I. model can come from any source, but for many of the so-called large language model A.I. systems, e.g., ChatGPT, it is at present typically the open Internet. Unfortunately, this means that both public and private or proprietary information is used for training the A.I. models, raising concerns that the models are being trained using data pilfered or scraped from the owners of such data without their permission or consent, and that such unauthorized ingestion of data may violate the rights of many thousands, if not millions, of owners of rights in such data. The data used in large language model A.I. systems also reflect a cutoff at some point in the past, most recently up to 2021 in the case of ChatGPT. Contrary to popular belief, A.I. models cannot currently use the Internet in real time, though that will change in more advanced A.I. progeny. Some A.I. chatbots have now taken the approach of being offensive, rude, and snide to be more human-like.[2]
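For readers curious what “scraping” looks like in practice, the following is a minimal sketch assuming the widely used third-party requests and beautifulsoup4 Python packages; the URL is a placeholder, and whether any given scrape is lawful or authorized is, of course, the legal question discussed above:

```python
# A minimal sketch of web "scraping" to collect text for a training corpus.
# Assumes the third-party requests and beautifulsoup4 packages; example.com
# is a placeholder, and real collection happens at vastly larger scale.
import requests
from bs4 import BeautifulSoup

def scrape_page_text(url: str) -> str:
    """Fetch a publicly accessible page and reduce it to its visible text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # stop on HTTP errors (403, 404, etc.)
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.get_text(separator=" ", strip=True)  # strip markup, keep text

if __name__ == "__main__":
    text = scrape_page_text("https://example.com")
    print(text[:200])  # in practice, this text would be added to a data store
```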
Moreover, the A.I. models typically are contained in a “black box,” in that the developers of these A.I. systems often treat their algorithms and probability engines as trade secret information to be shielded from disclosure to users. One glaring exception to this state of affairs is certain A.I. systems produced by Meta, which recently released its A.I. algorithms to parts of the public and pledged to make them available for use under an open-source license.[3]
Each of the three common A.I. system components above has drastically improved independently during the explosive growth of the Internet and the evolution of computer systems. The development of “machine learning” has also established a software architecture enabling these systems to use calculated probabilities, large data collections and automated feedback to “learn” and “teach” themselves, sometimes without the aid of any human input. The output, and any user feedback regarding it, are also “data” that can be recycled into or accumulated with the data stores used to train the A.I. system, making it possible for the A.I. model to learn from user feedback, and potentially from the output of other, independently executing A.I. systems. This has caused some to believe that A.I. is or can become a sentient being.[4] Yet, at the same time, there are very public instances of the newest A.I. systems generating erroneous and even nonsensical outputs, sometimes referred to as hallucinations.[5]
Given that Rule 1.1 of the Louisiana Rules of Professional Conduct requires a lawyer to be competent, to the extent A.I. could impact the lawyer’s work product or the interests of a client, the applicable rules of ethics likely require some level of awareness of A.I. technology and its potential impacts on those interests.[6] What are lawyers and their clients to make of these developments, which present us with yet another disruptive technology, perhaps one of the greatest to date? Below we address a few (but only a few) of the daunting questions the current state of A.I. poses to the legal and business communities, doing so with humility, knowing that today’s guardrails will undoubtedly have to evolve along with the A.I. technology’s influence on our clients and practices.[7]
II. Challenges with Integrating A.I. into the Practice of Law
The American Bar Association, through the work of its Artificial Intelligence Task Force, has recognized that guardrails are needed for the development and deployment of A.I. systems, resulting in the ABA House of Delegates’ adoption of Resolution 604 on February 6, 2023. In it, the ABA urges organizations that design, develop, deploy, and use A.I. systems, as well as governmental agencies that may regulate them, to follow these core guidelines:
a) Developers, integrators, suppliers, and operators (“Developers”) of A.I. systems and capabilities should ensure that their products, services, systems, and capabilities are subject to human authority, oversight, and control;
b) Responsible individuals and organizations should be accountable for the consequences caused by their use of A.I. products, services, systems, and capabilities, including any legally cognizable injury or harm caused by their actions or use of AI systems or capabilities, unless they have taken reasonable measures to mitigate against that harm or injury; and
c) Developers should ensure the transparency and traceability of their AI products, services, systems, and capabilities, while protecting associated intellectual property, by documenting key decisions made with regard to the design and risk of the data sets, procedures, and outcomes underlying their AI products, services, systems and capabilities.
Clearly, there are those in the legal community who are concerned that unbridled development of A.I. systems could bring significant challenges and harm to persons who are the subject of, or rely upon, A.I. system outputs and the resulting outcomes. Various judges are certainly amongst those concerned. Some federal courts have recently implemented rule changes under which a lawyer’s signature on a pleading certifies that, if A.I. was used to support a pleading or memorandum to the court, the human lawyer has reviewed and verified the submission and adopted it as reflecting legitimate caselaw.[8] It behooves all of us in the legal community to come to grips with this reality, so that we evaluate these systems and their potential impact on our practices and our clients in advance. Meanwhile, our clients facing A.I.-related risks are exploring A.I. risk management methods and A.I. insurance products.[9]
A. Understanding the Technology
For humans in a position of authority to oversee and control A.I., they must be able to evaluate the risks of its use. Evaluating risk requires at least a basic foundational understanding of the data these A.I. systems use, how the systems work to generate output, and ways to control the potential risks presented by the construction, operation and outputs of these systems. One can be lulled into complacency and a sense of false comfort by those who might downplay how different the newer A.I. systems are from the conventional search engines and other basic Internet technologies with which lawyers and their clients have grown familiar over the past two decades. The primary difference in the newer A.I. systems lies in the generative and transformer capabilities they now possess. The anthropomorphic nature of the interactive dialogue with A.I. systems (chatbots) can lead lawyers and clients alike to trust A.I. systems at their peril.
Generative A.I. is A.I. that can learn from existing artifacts (training data) to generate, at commercial scale, new, realistic artifacts that may reflect the characteristics of the training data but do not merely repeat that data. It can produce a variety of novel content, such as images, video, music, speech, text and software code. Generative A.I. systems that incorporate so-called transformer models can also track relationships between different items in sequential data (text, images, video, etc.) to build context and help the system derive meaning from that data. This capability in A.I.-speak is referred to as “attention” or “self-attention.”[10] Newer generative A.I. systems further include the ability to discriminate between fake data the generative component creates and real or realistic data, through use of a classification engine (the discriminator), the combination being referred to as a generative adversarial network (GAN).[11] Such recently developed features of generative A.I. improve the quality and accuracy of the generated outputs, presenting exciting new possibilities while also presenting some material risks.
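For the technically curious, the “self-attention” idea reduces to a few lines of arithmetic: each item in a sequence is scored against every other item, and those scores weight a blend of the items’ representations. The following is a simplified, single-head sketch of the scaled dot-product attention described in the transformer literature, written in Python with numpy and using random stand-in values rather than learned ones:

```python
# A simplified, single-head sketch of "self-attention" (scaled dot-product
# attention) from the transformer literature. Values here are random stand-ins;
# in a real model the projection matrices Wq, Wk, Wv are learned in training.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # stabilize before normalizing
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    """X holds one row per token; output blends tokens by attention weight."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how much each token "attends" to the others
    return softmax(scores) @ V               # weighted blend of token representations

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8): context-aware token vectors
```

In production transformers, many such attention “heads” run in parallel across many layers; the sketch only shows the core computation behind the term.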
B. Knowing and Adjusting the Terms and Conditions of Data and A.I. Use
In many cases, systems and services that employ an A.I. model will be sourced from third-party vendors of the models, unless the models are developed in-house (less likely) or through a hybrid arrangement in which the user’s company provides its own training data to a licensed A.I. model application programming interface (API) and configures a system that depends only upon internal resources and interacts only with internal user prompts, for greater security and greater control of the input to and output from the licensed A.I. models. Under any of these scenarios, the terms and conditions of use of the licensed A.I. models are established by the provider of the models and must be carefully scrutinized to determine what the customer’s rights and responsibilities will be, and what responsibilities, if any, the provider will assume.
C. Lack of Transparency
Most vendors of A.I. systems view the algorithms, system logic and architecture that are the building blocks of their models as proprietary, trade secret information. In most cases, transparency concerning these building blocks is intentionally lacking, making it difficult for users to really understand how the offered A.I. model or system works, what data the A.I. models use for training and how the models process that data and the prompts received from users.
As noted above, some are seeking to address this problem by laying bare their A.I. systems and offering to make them available as open-source software.[12] Whether others will be willing, or technically savvy enough about A.I., to jump on that bandwagon remains to be seen. Regardless, those seeking to employ a third-party A.I. engine will need to know the source of the engine upon which they rely, determine the terms and conditions of the license granting permission to use it, and further evaluate the engine’s inner workings in order to understand the data upon which it relies, what it does with such data and how it generates output. Without access to and understanding of such information, it may be impossible to explain to others how the A.I. system works to generate the output relied upon.
III. Issue-spotting for Clients Exposed to or Leveraging A.I.
A. Governance
In much the same way that IT security policies have become a mainstay in modern business, companies also need to have a policy on their own use and development of any system that could be classified as A.I. Various A.I. code of conduct policies have been or are being developed based on specific organizational needs and culture. Other A.I. governance tools are evolving for those who need something more than mere policy pronouncements.
For example, in a manner reminiscent of the way cybersecurity governance tools became mainstream over the past 10 years, on January 26, 2023, the National Institute of Standards and Technology (NIST) launched its A.I. Risk Management Framework, an evolving framework for establishing voluntary A.I. governance systems that can be applied in a variety of businesses and sectors.[13] This framework incorporates, amongst other things, recommendations and procedures for developing A.I. impact assessments, regular monitoring of A.I.-derived outcomes, A.I. audit trails and other protocols intended to ensure transparency, reliability, regulatory compliance and accountability through self-assessment and correction. In many circumstances, active board-level participation and oversight should be expected.
B. Contract and Vendor Management
A.I.-powered contract and vendor management systems leverage the power of A.I. and machine learning to streamline the contract management process. These A.I. systems can automate repetitive tasks, reduce errors, and provide insights that can help businesses make informed decisions. One of the biggest challenges with A.I. is that lawyers and their clients have little or no understanding of the data that sits behind it, how the A.I. is trained or how it behaves in certain situations. This is where the danger lurks: misplaced trust, uncertainty, and the inability to validate A.I.-generated responses. Those who rely on third-party vendors for essential products and services will need to know whether those products or services are generated using A.I. and, if so, how the A.I. is trained and what data it ingests, how outcomes are generated through the employed A.I., and what commitments the vendor will make to assist the customer in changing the way the A.I. operates to ensure equitable outcomes, transparency and accountability.
Additionally, contracts, clickwrap agreements, web site and mobile application terms of use and the like should be reviewed with A.I. and data scraping in mind, to assess whether such agreements should specifically include prohibitions on certain data scraping or data mining activities on an organization’s public Internet web site resources, especially if there could be any personal or proprietary information contained in or inferred from compilations of such data.
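Contractual prohibitions can also be paired with technical signals. For example, OpenAI has published the user-agent token its web crawler uses (“GPTBot”) so that site operators can disallow it in the site’s robots.txt file. A minimal example follows, with the caveat that robots.txt is a voluntary convention honored only by compliant crawlers, not an enforcement mechanism, and thus no substitute for enforceable terms of use:

```
# robots.txt, served at the web site's root
User-agent: GPTBot
Disallow: /
```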
C. Human Resource Management Practices
In the human resources realm, A.I. has been used to conduct phone interviews and screen candidates and, without appropriately correcting for biases, could be subject to preferring certain voice inflections and response times associated with gender, race, national origin, age, or disability. A.I. also can pose a risk to employee privacy if not implemented correctly.
In that regard, the Equal Employment Opportunity Commission (EEOC) is concerned. It released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” which is focused on preventing discrimination against job seekers and workers. The document explains the application of key established aspects of Title VII of the Civil Rights Act (Title VII) to an employer’s use of automated systems, including those that incorporate A.I.[14] The EEOC has already addressed one A.I. case involving employment discrimination.[15] Clearly, the deployment of A.I. systems in the human resources context carries with it risks to be evaluated and mitigated.
D. New Product Development
Does your firm, company or client intend to develop a new product or service that may rely on a third party’s A.I. application to process client or company data or inputs, either to generate outputs delivered to others or to create a deliverable work product? If so, to what extent does the use of such an A.I. system remove the human element from the authorship or inventorship determination? Will the “inventor” or “author” be the A.I. system? And if so, can the output be protected under current intellectual property laws? The Copyright Office, the U.S. Patent and Trademark Office and the courts currently appear unified in the assessment that such output, to the extent solely generated by A.I. systems, would not be eligible for patent or copyright protection.[16] We suspect thorny issues around co-inventorship and co-authorship will remain for some time to come, until there are definitive court rulings or legislative developments.
E. IP Infringement Risks
In the context of infringement of third-party intellectual property rights, the use of A.I. may generate output from training data that comprised the works of authorship of others, raising the question of whether the output constitutes a “copy” or “derivative work” of the original works so as to constitute copyright infringement. As is the case with the use of any innovative technology, it will be important to determine whether your organization’s or your client’s use of A.I. involves using, processing or distributing any content, personally identifying information, images or likenesses of others, potentially without the express consent of the involved data subjects. It will also be important to determine whether the systems used, or the products produced with them, could involve innovations that are the subject of patent or other forms of intellectual property protection. And it would be wise to assess whether vendors of A.I. systems have agreed to indemnify you or your clients in the event the use of the vendor’s A.I. outputs or A.I. systems is accused of infringing another’s intellectual property rights.
F. Data Privacy
Do you or your client plan to use an A.I. system that will collect, use, store or process any personal data of an identifiable individual or group of identifiable individuals? It is likely that such collection, use, storage, or processing will be regulated in one way or another, by a privacy law now on the books or one enacted in the near future. At least 12 states have now passed comprehensive privacy legislation to regulate processing of such personal information in various circumstances.[17] Several foreign countries and the EU now have comprehensive privacy laws that regulate how you process such information of persons located in those jurisdictions.
G. Legislation and Laws Dealing with A.I.
Not surprisingly, A.I. has become an emerging issue in law and the courts. Many states have enacted laws seeking to regulate the use of A.I.[18] The European Union has preliminarily approved draft legislation that would purport to regulate A.I., with extraterritorial effect. These A.I.-related laws require study and comprehension in day-to-day legal practice as well.
The courts are now confronting questions about how A.I. may influence traditional notions of authorship, inventorship, and ownership. It is not unreasonable to expect that this is just the tip of a forthcoming legal iceberg, given the plethora of anticipated future applications and uses, for good and for bad, of A.I.
IV. A.I.’s Impact on Legal Ethics and Professionalism: Cave intelligenti artificialis
The sordid example of the two New York attorneys who relied on ChatGPT in a legal proceeding is the first, but not the only, cautionary tale regarding A.I. and professional ethics.[19] Attorneys and those in the legal profession will not be allowed “ignorance of A.I.” as a valid defense.[20] The benefits of A.I. in one’s practice can never override the duties of diligence, professionalism, and responsibility that go with the legal profession. Rule 1.1 of the Rules of Professional Conduct is likely to require a baseline level of competence regarding the use of A.I., just as it does for the Internet, email and other forms of information technology prevalent in law and business today.
Those who believe A.I. presents a “yes or no” choice should think again. The A.I. cat is already out of the bag.[21] We as attorneys and legal professionals will never have mastery over A.I. or completely understand it, much less control it. We can only choose how to use, react to, and address it. That requires developing values, procedures, and practices purpose-built to deal with A.I.’s issues, perquisites, and pitfalls.
Attorneys and law firms now have no choice but to adopt a meaningful A.I. code of ethics or values to deal with the A.I. juggernaut.[22] We also should realize that any such A.I. code of ethics will become obsolete very quickly without constant dialogue and discussion.
One thing about A.I. is clear: whether or not A.I. is or can become sentient, A.I. can ensorcel attorneys and those in the legal profession. Some may be tempted to anthropomorphize A.I. and take from it a sense of false comfort that will become our undoing. We are already faced with the integration and interface of humans and A.I. We have perhaps already become Artificialis intelligentia utens homines. If so, we need to find out what that means and address it proactively and cautiously.
V. A Lawyer’s Abbreviated Glossary of A.I. Lingo
Artificial Intelligence – An engineered machine system, typically software executing on one or more computer processors, for processing and analyzing data and generating outputs in a manner that seeks to emulate human intelligence.
Chatbot – A user interface to an artificial intelligence application in which the output simulates human-like conversation or interaction, leveraging natural language processing techniques to comprehend and respond to human input via text or other input means.
Deep Learning – A field within Machine Learning using artificial neural networks to perform multiple phases of processing to extract progressively more sophisticated attributes from data.
Generative A.I. – A type of A.I. that trains machine learning models on large data collections to generate new outputs or content, e.g., text, code, images, videos, music and the like, typically based upon user input or prompts.
Generative Pre-trained Transformer (GPT) – A kind of generative large language model pre-trained with a massive amount of diverse text data and discriminatively fine-tuned to focus on specific tasks.
Hallucination – In the context of A.I., when generative A.I. creates outputs that contradict the base data or convey factually incorrect information as if it were fact.
Inference – A machine learning process, carried out by a trained A.I. model, for making predictions or decisions based upon input.
Ingestion – In the context of A.I., the reception and processing of data by a computer system, typically a computer system operating an artificial intelligence application or program.
Large Language Model (LLM) – A.I. that uses deep learning techniques to build a model, trained on massive amounts of text, that discerns patterns and relationships among text characters, words and phrases. The two types are generative LLMs (which make text predictions based on probabilities of word sequences discerned from the training) and discriminative LLMs (which make classification predictions based on probabilities of data features and weights discerned from the training).
Machine Learning – A type of A.I. model that represents underlying patterns or relationships within a training data collection once an algorithm is applied to that collection, so that it can be used to make predictions from, and perform tasks on, new data.
Natural Language Processing – A type of processing of language or speech that allows a computer to interpret and manipulate language to understand its meaning, assess sentiment and evaluate its importance.
Neural Networks – Software models employed in machine learning to mimic how neurons interact, using various processing layers including at least one hidden layer, to enable modeling of complex associations or patterns in data.
Scraping – The act of finding and collecting data for ingestion from publicly accessible Internet web pages and other data sources connected to a computer network such as the Internet.
Transformer – A type of neural network that learns context and meaning by following relationships in sequential data (e.g., words in sentences). These neural networks apply evolving sets of mathematical processes (called “attention” or “self-attention”) to discern ways sometimes seemingly unrelated data in a series are influenced by, or are interdependent with, each other.
[1] See Angelo Ovidi, Rewriting Asimov (and Murphy) Laws for AI, https://www.linkedin.com/pulse/rewriting-asimov-murphy-laws-ai-angelo-ovidi-mbcs.
[2] Is Google's Chatbot Sentient? No, and Here's Why, https://www.haaretz.com/israel-news/tech-news/2022-07-03/ty-article/.premium/is-googles-chatbot-sentient-no-and-heres-why/00000181-c3e4-dcfd-a797-dbee545a0000; The Chatbots Are Here, and the Internet Industry Is in a Tizzy, https://www.nytimes.com/2023/03/08/technology/chatbots-disrupt-internet-industry.html; Why Chatbots Sometimes Act Weird and Spout Nonsense, https://www.nytimes.com/2023/02/16/technology/chatbots-explained.html; Google Engineer Claims AI Chatbot Is Sentient: Why That Matters, https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/; Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror, https://www.nytimes.com/2023/02/26/technology/ai-chatbot-information-truth.html.
[3] See Metz, C. and Isaac, M., In Battle Over A.I., Meta Decides to Give Away Its Crown Jewels, The New York Times, May 18, 2023, https://www.nytimes.com/2023/05/18/technology/ai-meta-open-source.html (last viewed Aug. 23, 2023).
[4] Leonardo De Cosmo, Google Engineer Claims AI Chatbot Is Sentient: Why That Matters, Scientific American, July 12, 2022, https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/ (last viewed Aug. 23, 2023).
[5] Thorbecke, C., AI tools make things up a lot, and that’s a huge problem, Aug. 29, 2023, https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations (last viewed Aug. 31, 2023).
[6] See Louisiana State Bar Association, Public Opinion 19-RPCC-0211, February 6, 2019 (Lawyer’s Use of Technology).
[7] See Vipin Bharathan, Guardrails For AI, What Is Possible Today, Forbes, June 25, 2023, https://www.forbes.com/sites/vipinbharathan/2023/06/25/guardrails-for-ai-what-is-possible-today/.
[8] See Judge Brantley Starr, Northern District of Texas, Mandatory Certification Regarding Generative Artificial Intelligence, https://www.txnd.uscourts.gov/judge/judge-brantley-starr (last viewed Aug. 23, 2023).
[9] See some illustrative A.I. insurance-related products at https://themhpgroup.com/ai-insurance/: “AI Insurance can provide coverage for a wide range of scenarios, including: 1. Liability Risks: As AI technologies are adopted across various industries, they may inadvertently cause harm, errors, or accidents. AI Insurance covers liability arising from AI system malfunctions or failures, protecting you against potential legal claims. 2. Cybersecurity Risks: AI applications often handle vast amounts of data, making them targets for cyberattacks. AI Insurance covers losses related to data breaches, hacks, and cyber threats that can compromise the integrity and confidentiality of your AI systems. 3. Intellectual Property Risks: In the competitive field of AI development, protecting your intellectual property is crucial. AI Insurance covers potential legal expenses and damages related to patent infringement, copyright violations, and other IP disputes. 4. Business Interruption Risks: AI systems are critical to the operations of many organizations. AI Insurance provides coverage for losses from AI-related business interruptions, ensuring that your organization can continue to function even in the face of disruptions. 5. Ethical and Regulatory Risks: The use of AI technologies may raise ethical and regulatory concerns, especially in areas such as data privacy, bias, and transparency. AI Insurance covers regulatory fines, penalties, and legal expenses arising from non-compliance with laws and regulations governing AI technologies.”
[10] Merritt, R., What is a Transformer Model?, March 22, 2022, https://blogs.nvidia.com/blog/2022/03/25/what-is-a-transformer-model/ (last visited Aug. 31, 2023).
[11] Generative adversarial network, footnote 3, https://en.wikipedia.org/wiki/Generative_adversarial_network (last visited Aug. 31, 2023) (citing Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua, Generative Adversarial Nets, 2014, Proceedings of the International Conference on Neural Information Processing Systems (NIPS 2014). pp. 2672–2680).
[12] See supra note 3.
[13] AI Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework (last accessed August 28, 2023).
[14] See EEOC Releases New Resource on Artificial Intelligence and Title VII, Outlines Considerations for Incorporating Automated Systems into Employment Decisions, May 18, 2023, https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii (last visited Aug. 23, 2023).
[15] See Burgo, R. and Hughes, W., EEOC Settles First-Ever AI Discrimination Lawsuit, August 17, 2023, https://www.shrm.org/resourcesandtools/legal-and-compliance/employment-law/pages/eeoc-settles-ai-discrimination-lawsuit.aspx (last viewed August 23, 2023).
[16] Thaler v. Perlmutter, No. 1:22-cv-01564 (D.D.C. Aug. 18, 2023) (holding AI-created artwork ineligible for copyright for lacking a human author); Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), cert. denied, __ U.S. __, 143 S.Ct. 1783, 215 L.Ed.2d 671 (2023) (affirming decision that an inventor must be a human in order to receive patent protection for an invention).
[17] Desai, A., US State Privacy Legislation Tracker, 4 Aug. 2023, https://iapp.org/resources/article/us-state-privacy-legislation-tracker/ (last visited Aug. 31, 2023).
[18] See US State-by-State AI Legislation Snapshot, undated, https://www.bclplaw.com/en-US/events-insights-news/2023-state-by-state-artificial-intelligence-legislation-snapshot.html (last visited Aug. 31, 2023).
[19] See Merken, S., New York lawyers sanctioned for using fake ChatGPT cases in legal brief, June 22, 2023, https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ (last viewed Aug. 23, 2023).
[20] See Weiser, B. and Schweber, N., The ChatGPT Lawyer Explains Himself, https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html (last viewed Aug. 23, 2023) (“In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he ‘did not comprehend’ that the chat bot could lead him astray”).
[21] See Mina Kim and Sarah Mohamad, 'The Cat Is Out of the Bag': As DALL-E Becomes Public, the Possibilities — and Pitfalls — of AI Imagery, Sep. 26, 2022, https://www.kqed.org/news/11926565/the-cat-is-out-of-the-bag-the-possibilities-and-pitfalls-of-ai-imagery (last accessed Aug. 31, 2023).
[22] Lawton, G. and Wigmore, I., AI ethics (AI code of ethics), https://www.techtarget.com/whatis/definition/AI-code-of-ethics (last accessed Aug. 31, 2023).
[1] Artificial Intelligence Coined at Dartmouth, https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth (last viewed Aug. 13, 2023).
[2] The Internet was born out of a series of inventions relating to Transmission Control Protocol (TCP)/Internet Protocol (IP), the Domain Naming System (DNS) and the World Wide Web, between 1974 and 1989. See A Short History of the Internet, Dec. 3, 2020, https://www.scienceandmediamuseum.org.uk/objects-and-stories/short-history-internet (last accessed Aug. 22, 2023).
[3] Shirriff, K., The Surprising Story of the Microprocessor, Aug. 30, 2016, https://spectrum.ieee.org/the-surprising-story-of-the-first-microprocessors (last accessed Aug. 22, 2023).
[4] Phiri, M., Exponential Growth of Data, Nov. 19, 2022, https://medium.com/@mwaliph/exponential-growth-of-data-2f53df89124 (last accessed Aug. 22, 2023).