Authored By: Danielle Christine Bengesai
University of Johannesburg
Introduction:
The earliest example of Artificial Intelligence (AI) can be dated back to 1950, when Claude Shannon, an American mathematician, created a remote-controlled maze-solving mouse. Since then, computer systems have improved at a remarkable pace, exceeding the expectations of many scholars and experts.
Over the last few years, figures such as Elon Musk and Bill Gates have even voiced concerns that AI may eventually mark the end of the human race.[1] This article assesses the risks that follow the inevitable infiltration of AI into legal practice, and the mitigating factors that could be considered to protect the future of the legal profession.
Background:
Artificial intelligence is a term used for computer systems created to perform complex tasks that normally require human reasoning, decision-making, creativity and the like.[2] Over the years, it has become such a mainstay of research that, for many users of assistive AI tools, it has come to substitute for search engines altogether. It plays this role through tools such as ChatGPT, Gemini, NotebookLM and many more.
While AI brings many benefits that have since changed the research and search engine space, many have foregone ordinary research mechanisms for the easier, more accessible AI tool. It is now slowly becoming a workplace essential, promising the kind of financial relief many employers dream of today. This shift, however, has not come without complaint, as AI tools threaten the very essence of employment and the many principles that underlie the human workforce.
MAIN BODY:
The following sections assess the risks that come with using AI in the legal field.
Section 1: Misinformation
Despite the widespread trust and reliance many have placed in artificial intelligence, the legitimacy of the information these systems provide has always remained questionable. Systems such as ChatGPT have, on many occasions, been put to the test and have failed: they have provided false information, omitted data, mixed truth and fiction, or even fabricated information outright. Thus, it has become devastatingly clear that information provided by AI tools may at times be inaccurate, incomplete, misleading or out of date.[3]
An example of such falsification is found in the South African case Mavundla v MEC Department of Co-operative Government and Traditional Affairs.[4] The applicant, Mavundla, was a legal professional who submitted false or irrelevant legal citations in a court of law after using an AI tool to draft the heads of argument. The case posed an ethical dilemma, as the judge made it clear that Mavundla had been grossly negligent and had failed to act ethically as a legal professional.
Mavundla was then referred to the Legal Practice Council (LPC) for assessment and, possibly, disciplinary action. The case also highlighted the poor quality of education regarding the use of such computer systems, as he insisted that he had not been aware AI tools were capable of falsifying information. Artificial intelligence can be just as misleading as it can be effective. The law rests on foundational principles such as rigorous research and legal certainty; the tainting of those principles by misinformation therefore poses a threat to the quality of modern legal education as a whole.
A similar case, often discussed in conjunction with Mavundla, is Roberto Mata v Avianca Inc.[5] A passenger claimed damages against the airline company, Avianca Inc, but his lawyer filed submissions citing judicial authorities that the opposing side could not locate. The lawyer eventually admitted that he had used ChatGPT as a research tool and had not been aware that it would fabricate the sources it provided. The court found that he had acted with subjective bad faith.
Both cases highlight the conditional reliability of artificial intelligence tools when it comes to legal research.
Section 2: Confidentiality breaches
The development of artificial intelligence is highly dependent on its collection of data. Every action and response reflects the quality of the data on which the system is trained.
In Mutnick v Clearview AI,[6] the plaintiff challenged Clearview AI's creation of a facial recognition database built from more than three billion photos scraped from online social media platforms, without the creators' consent to either the capture or the use of those photos. In Dinerstein v Google,[7] the plaintiffs brought an action against Google for its use of improperly obtained healthcare data to train diagnostic and search algorithms for the company's own profit.
These cases highlight the growing demand for protection and regulation when it comes to the development of AI systems. Many products are growing reliant on AI usage and if regulation is not sought, cases such as the abovementioned could increase in number in the near future.
Fortunately, there has been an increase in proactivity, with various legal systems and jurisdictions taking action either through legislation or through precedent. An example of such regulation is the European Union's General Data Protection Regulation (GDPR), which aims to safeguard data protection in European law.[8] However, there still appears to be a gap in such provisions, which do not cater specifically to legal professionals in the field.
If the use of AI in the law becomes commercially driven, private data would inevitably be compromised. If legal professionals were to rely on AI tools for research or drafting purposes, they would have to feed personal and private client data into prompts, risking the protection of that data, as it could be used to generate future responses by those same tools. Therefore, AI would pose a threat to legal confidentiality should it be integrated into legal practice.
Section 3: The obsolescence of empathy
Finally, legal practice succeeds in the manner that it does because it is based on both objective and subjective factors. The law can only operate with certainty because it reflects what many believe, in principle, it should be. That is to say, it works because it is developed by those who are at once victims and students of the human experience: legal professionals and subjects of the law alike.
Human judgement is often contextualised and can be questioned, as opposed to AI outputs, which often add an unjust veneer of objectivity to the conversation.[9] Many argue that AI's processing capacity makes up for its lack of cognitive function. However, in a system built on making the human experience fair, AI's ability to outthink humans should not be placed above its inability to understand, because speed will never substitute for empathy in a court of law.[10]
The law is stringent, but just, and only those who are subject to it should be able to interpret it. The question that remains, then, is whether such tools can still be integrated into the legal system despite this shortcoming. Automated empathy is a development many AI programmers have worked towards. Most researchers hold that such a development may take a very long time to materialise, but that when it does, it may drive positive change in modern society.[11] This risk may therefore be addressed in the near future, but for now, these factors should still be considered carefully.
Discussion: Mitigation
However, beyond the risks that would accompany AI's integration into the law, there is no denying that the inevitability of that integration must be considered.
With millions losing jobs to AI, employers are quickly realising the benefits of a technology-infused work environment. The legal sector is thus fast becoming one of the many fields that will advance to be AI-inclusive, as the benefits far outweigh the doubts that have arisen.
This section will discuss the ways in which AI can be integrated into the legal field ethically while mitigating some of the abovementioned concerns.
Ethical use:
- The best way to fairly integrate AI into the legal field is to adjust the norms and ethics surrounding the conversation.
- This entails publishing more legislation and guidelines that set an undisputed standard of practice for the use of AI tools in the legal sector. This means prescribing what legal professionals can and cannot use AI computer systems for, and how they can maintain the general standard of ethics in doing so.
Introducing AI in Legal Education:
- Cases such as Mavundla and Mata v Avianca Inc have shown that a gap has formed between legal research skills and the functions of AI tools. This can be mitigated by introducing themes of the Fourth Industrial Revolution and Artificial Intelligence into legal education, training young legal professionals to use AI as an assistive tool and not a substitute for research.
- Information literacy is an essential skill that presents itself across multiple disciplines today. Including information literacy and artificial intelligence use in education may be the best way we can prepare the next generation of legal professionals for the modern work environment.
Fact-checking and research mechanisms:
- The final adjustment that can be made is the provision of fact-checking software that goes hand-in-hand with AI tools. Such a provision would safeguard professionals from misinformation produced by AI computer systems.
- Furthermore, the creation of research mechanisms that work with legal databases and bridge the gap between modern and traditional research skills would allow the law to develop alongside its growing professionals, keeping AI tools assistive and thus ethical for a brighter future in the field of law.
Conclusion:
Although AI represents the development of society and changes that once seemed impossible, it has become a source of problems we once deemed unimaginable. Its contribution to the legal profession is nothing short of that. The risks discussed here are broad, stretching from ethical dilemmas such as falsified responses to prompts, to breaches of confidentiality, to the obsolescence of empathy should AI be successfully integrated into the legal profession. However, such a change in the legal sector is inevitable and should, instead, be accepted broadly. The mitigating factors that can bridge the gap between traditional research mechanisms and AI tools, as discussed above, include developing legislation to regulate the ethical and sustainable practice of the law, introducing AI into legal education, and creating fact-checking mechanisms to complement AI tools. By regulating AI use and the education surrounding it, the legal profession can retain its certainty and ethics against the backdrop of a changing world.
Reference(S):
Blogs and Websites:
- Emily Dorotheau, ‘Reap the benefits and avoid the legal uncertainty: who owns the creations of artificial intelligence?’ (Computer and Telecommunications Law Review).
- Abigail Bowman, ‘What is Artificial Intelligence?’ (NASA, 13 May 2024) <What is Artificial Intelligence? – NASA> accessed 9 July 2025.
- LexisNexis, ‘Balancing AI and Human Judgment: Ethical Considerations in the Legal Profession’ (LexisNexis Blog, 11 March 2024) <https://www.lexisnexis.com/blogs/za/b/legal/posts/balancing-ai-and-human-judgment-ethical-considerations-in-the-legal-profession> accessed 9 July 2025.
- John Notsa, ‘Is Empathy the Missing Link in AI’s Cognitive Function?’ (Psychology Today, 19 October 2024) <Is Empathy the Missing Link in AI’s Cognitive Function? | Psychology Today> accessed 9 July 2025.
- Tom Fleishman, ‘AI-Generated Empathy Has Its Limits’ (Cornell Chronicle, 8 May 2024) <AI-generated empathy has its limits | Cornell Chronicle> accessed 9 July 2025.
Cases:
- Mavundla v MEC Department of Co-operative Government and Traditional Affairs, KwaZulu-Natal and Others [2025] JOL 68108.
- Mata v Avianca Inc [2023] unreported (SDNY) (US).
- Mutnick v Clearview AI, et al. [2020] 1:20-cv-00512 (N.D. Ill.).
- Dinerstein v Google [2019] 1:19-cv-04311 (N.D. Ill.).
Statutes and Statutory Instruments:
- Judiciary of England and Wales, Refreshed AI Guidance: Judicial Use of Artificial Intelligence Tools (April 2025).
- General Data Protection Regulation 2016/679, 2016 O.J. (L 119) 1.
[1] Emily Dorotheau, ‘Reap the benefits and avoid the legal uncertainty: who owns the creations of artificial intelligence?’ (Computer and Telecommunications Law Review) accessed 9 July 2025.
[2] Abigail Bowman, ‘What is Artificial Intelligence?’ (NASA, 13 May 2024) <What is Artificial Intelligence? – NASA> accessed 9 July 2025.
[3] Judiciary of England and Wales, Refreshed AI Guidance: Judicial Use of Artificial Intelligence Tools (April 2025).
[4] Mavundla v MEC Department of Co-operative Government and Traditional Affairs, KwaZulu-Natal and Others [2025] JOL 68108.
[5] Mata v Avianca Inc [2023] unreported (SDNY) (US).
[6] Mutnick v Clearview AI, et al. [2020] 1:20-cv-00512 (N.D. Ill.).
[7] Dinerstein v Google [2019] 1:19-cv-04311 (N.D. Ill.).
[8] General Data Protection Regulation 2016/679, 2016 O.J. (L 119) 1.
[9] LexisNexis, ‘Balancing AI and Human Judgment: Ethical Considerations in the Legal Profession’ (LexisNexis Blog, 11 March 2024) https://www.lexisnexis.com/blogs/za/b/legal/posts/balancing-ai-and-human-judgment-ethical-considerations-in-the-legal-profession accessed 9 July 2025.
[10] John Notsa, ‘Is Empathy the Missing Link in AI’s Cognitive Function?’ (Psychology Today, 19 October 2024) <Is Empathy the Missing Link in AI’s Cognitive Function? | Psychology Today> accessed 9 July 2025.
[11] Tom Fleishman, ‘AI-Generated Empathy Has Its Limits’ (Cornell Chronicle, 8 May 2024) <AI-generated empathy has its limits | Cornell Chronicle> accessed 9 July 2025.