Authored By: Lesego Motloenya
Boston City Campus Business College
ABSTRACT
Globally, artificial intelligence is exerting an ever-greater influence on the practice of law. AI’s future in South Africa’s legal system is both exciting and uncertain: AI can improve the accessibility and efficiency of legal procedures, but its application must be carefully considered to prevent unforeseen consequences. This article examines AI ethics in the legal field as well as the responsible use of AI. AI has developed quickly and is now acknowledged as a general-purpose technology, similar to the internet or electricity, because of its broad applications across many different fields and its capacity to transform economies and societies. The National AI Policy Framework for South Africa is a strategic plan designed to harness AI technologies to advance the nation’s technological development, economic growth, and social well-being. The framework places a strong emphasis on ethical development and prioritises responsible AI deployment that is consistent with South Africa’s values and priorities.
INTRODUCTION
In the future, artificial intelligence (AI) will play a growing part in the legal industry. Its potential impact on the legal system includes improved analytical capability, more accurate court rulings, automated legal advisory services, and much more. These developments are expected to make the legal sector operate more quickly and effectively. With recent court cases highlighting both AI’s potential and its risks, the question of how AI can be reconciled with South African legal principles has gained significant attention. In Makunga v Barlequins Beleggings, the Western Cape High Court acknowledged the value of AI tools when a self-represented litigant used AI-generated submissions that were praised for their quality. On the other hand, in Parker v Forsyth, the Johannesburg Regional Court warned against the perils of AI-generated disinformation, reprimanding attorneys for submitting fictitious case law derived from AI tools. The court emphasised that AI should not replace human diligence in legal research. This principle was reaffirmed in Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KZN, which emphasised that legal professionals remain ultimately responsible for verifying AI-generated results.
MAIN BODY
The Responsible Way To Use AI:
With AI’s assistance, attorneys and law enforcement organisations can analyse vast databases, leading to quicker and more accurate decision-making. By examining earlier court rulings, related cases, and their outcomes, AI helps inform decisions in new cases. AI is therefore expected to boost productivity substantially by saving time, especially by automating the repetitive tasks that currently account for a significant share of a professional’s workload. The first phase of South Africa’s National Artificial Intelligence Policy Framework lays the groundwork for a time when AI is used responsibly and successfully to drive digital transformation and foster inclusive growth throughout the nation. Legal professionals should always verify the authenticity and applicability of any source, regardless of how persuasively an AI tool presents it. To avoid misrepresenting the law or citing cases that do not exist, they must read the original rulings. Decisions such as Parker and Mavundla ought to spark discussion about the most effective ways to incorporate AI into a sound legal framework.
The Ethics of AI:
The advancement and use of artificial intelligence (AI) raise several new ethical and legal issues. The ethical concerns surrounding AI aim to ensure that the development and application of this technology adhere to the values of equality, justice, human rights, and social responsibility. AI must be developed and used in a way that safeguards human rights, freedoms, and well-being; it is critical that AI promotes human social and economic advancement rather than harm. By offering standards for its moral use, AI ethics helps shape AI policy for legal applications, addresses concerns about AI technologies, and guides ethical AI development that conforms to regulatory requirements. To guarantee the ethical application of AI, numerous nations and international organisations are creating rules and regulations. For instance, proposed and enacted laws in the US, such as the National Artificial Intelligence Initiative Act, are meant to guarantee the responsible and transparent application of AI. At the same time, China and other nations are enforcing laws against ethical transgressions in specific AI fields, such as social monitoring and the use of biometric data. Ethical development and deployment aim to guarantee that AI systems are created and applied with moral principles at the forefront, tackling problems such as accountability, transparency, bias, and fairness. In the legal domain, two important ethical issues are ensuring that AI is used ethically and eliminating bias in AI systems. Legal practitioners must also abide by ethical norms when practising law, and they must handle these difficulties carefully in order to preserve integrity and trust.
LEGAL FRAMEWORK
A first step toward creating the National AI Policy, the National Artificial Intelligence (AI) Policy Framework for South Africa seeks to advance the integration of AI technologies to boost economic growth, improve societal well-being, and establish South Africa as a pioneer in AI innovation. The main goal of the policy framework is to strategically promote a strong AI ecosystem by coordinating efforts in talent development, infrastructure improvement, and research and development. The policy framework places a strong emphasis on the value of human-centred AI, making sure that AI tools support human judgment rather than replace it. By upholding professional accountability and advancing human values, the framework ensures that AI development is aligned with ethical and societal concerns. The framework supports AI companies, encourages AI education and training programmes, and facilitates public-private partnerships to meet the need for economic development and capacity building. It also contains measures to strengthen cybersecurity and defend AI systems against malicious attacks. By giving these sectors top priority, the framework hopes to foster an environment favourable to AI research, guaranteeing that the financial advantages of AI are shared widely and contribute to the country’s overall development. In order to direct the responsible and ethical development, application, and deployment of artificial intelligence across all spheres of society, South Africa must create a national AI policy. As AI technologies rapidly advance, they offer unprecedented opportunities for economic growth, improved public services, and enhanced quality of life.
In response to two recent court decisions in which artificial intelligence-generated case law was submitted as precedent, the Legal Practice Council (LPC) of South Africa is creating an AI policy to govern the use of AI by attorneys. Northbound Processing v SA Diamond Regulator and Mavundla v KZN MEC for Cooperative Governance were referred to the LPC because attorneys relied on nonexistent rulings in their papers. The LPC, which oversees attorneys, advocates, and aspiring legal professionals, regards this as a major breach of conduct. Since precedent is a key component of South Africa’s judicial system, any abuse of AI is particularly worrisome. The LPC will also consult academic institutions, which are themselves grappling with the effects of generative AI on assessment and research integrity.
JUDICIAL INTERPRETATION
The Makunga v Barlequins Beleggings (Pty) Ltd t/a Indigo Spur (WCC) (unreported case no 19733/2017, 1-12-2023) (Bishop AJ) [1] case in the Western Cape Division of the High Court highlights the potential advantages of AI for improving access to justice. The presiding judge, Bishop AJ, praised the quality of the self-represented litigant’s submissions, even remarking that some members of the Bar had submitted arguments worse than those generated by AI. This case shows that AI can strengthen legal argumentation for self-represented litigants, thus promoting access to justice. It also reflects the dual nature of AI in legal practice: it can boost efficiency and accessibility, but if misused, it can cause serious ethical and professional breaches. Recognising these dangers, commentators have proposed a set of Ethics Guidelines for Legal Practitioners in South Africa regarding the Use of Generative AI. These guidelines establish clear standards for responsible AI use, ensuring that legal practitioners employ AI ethically, transparently, and diligently. In the Mavundla case, Bezuidenhout J experimented with ChatGPT by submitting several questions related to the content of certain cases. The purpose was to assess the accuracy of the responses, and the court found that the information given was blatantly incorrect. The conclusion from this experiment was that ChatGPT (and arguably AI tools generally) is unreliable “as a source of information and legal research”. In fact, Bezuidenhout J stated that “in my view, relying on AI technologies when doing legal research is irresponsible and downright unprofessional”, clearly expressing the court’s disapproval of using AI for legal research.[2]
Conversely, the Johannesburg Regional Court in Parker v Forsyth NNO and Others [2023] ZAGPRD 1 highlighted the risks of uncritical reliance on AI. The court reprimanded attorneys who submitted fabricated legal authorities generated by ChatGPT, emphasising that practitioners must independently verify all sources. It is becoming increasingly clear that, while AI can be a powerful tool, it cannot substitute for professional diligence and ethical responsibility. [3]
CRITICAL ANALYSIS
Given the current state of the technology, generative AI can be useful for routine tasks such as summarising legal texts, improving language, or transcribing interviews, but its application to complex legal reasoning or to applying the law to particular situations remains problematic. AI can be used in legal services for activities including contract drafting, document analysis, and court case analysis and summarisation. If AI is to be employed in legal research and practice, it must be able to recognise legal concepts, apply them precisely, and reference case law reliably. Particularly in the well-established domains of delict and undue enrichment, AI models demonstrate noteworthy proficiency in recognising and applying South African private law principles. The ability of most models to correctly classify legal actions such as the actio de pauperie and the actio legis Aquiliae suggests that AI can competently navigate common-law-based claims. However, their struggle with more specialised doctrines, such as negotiorum gestio, highlights the unevenness of their legal reasoning. Ultimately, while AI presents promising applications in legal research and analysis, its current limitations reinforce the irreplaceable role of human legal expertise. AI tools can support legal practitioners and scholars in structuring arguments and retrieving information, but their inconsistent engagement with statutes and case law means they cannot yet function as stand-alone research tools. Addressing these shortcomings, especially in statutory engagement and case law accuracy, will be crucial to establishing AI’s dependability and usefulness in the legal industry as the technology advances.
RECENT DEVELOPMENTS
Under the Protection of Personal Information Act 4 of 2013 (POPIA), which places stringent requirements on data processing, legal professionals must take reasonable precautions to protect client information. Before using AI technologies, practitioners should consider anonymising data where necessary, protecting personally identifiable information while still allowing AI to assist with analytical tasks. The guidelines also advise practitioners to seek advice from IT specialists before incorporating AI into their operations, to ensure compliance with POPIA. In practice, attorneys should protect client information by anonymising sensitive data before uploading it or by using locally hosted, secure AI tools, so that the use of AI does not jeopardise client privacy.
SUGGESTIONS/WAY FORWARD
Regardless of how convincingly an AI tool presents a source, legal practitioners must always verify its authenticity and relevance using reliable databases. Legal practitioners must approach AI-generated content critically, acknowledging that generative AI tools prioritize highly cited cases and may overlook less prominent but legally relevant authorities. Many generative AI tools operate on cloud-based platforms that store and process user inputs, posing a risk of unauthorised data exposure. To prevent mentioning cases that don’t exist or distorting the law, practitioners should study the underlying judgments rather than depending solely on AI-generated summaries. The legal profession can benefit from AI without sacrificing thoroughness and credibility by incorporating these verification stages into routine practice and by implementing firm-wide policies, senior mentorship, and continual training.
CONCLUSION
In conclusion, artificial intelligence offers transformative potential for South African legal practice, improving efficiency, access to justice, and cost-effectiveness. Yet its adoption must be tempered by rigorous professional oversight, ethical responsibility, and critical evaluation. Cases such as Makunga, Parker, and Mavundla demonstrate that courts value AI as a tool but will not excuse negligence or the unverified use of AI-generated content. South African legal practitioners owe a fundamental duty of candour to the court, as enshrined in the Code of Conduct for Legal Practitioners. The judge in Mavundla underscored that courts assume counsel’s cited authorities are real and relevant. Whether caused by negligence, over-reliance on AI, or supervision lapses, presenting fictitious precedents to a court is the direct opposite of that duty.
REFERENCE(S):
Case Law
Parker v Forsyth NO and Others (Regional Court, Johannesburg, Gauteng) unreported case no 1585/20 (29 June 2023) (Parker), reported on Law Library South Africa as Parker v Forsyth NO and Others ZAGPRD 1, available at https://lawlibrary.org.za/akn/za-gp/judgment/zagprd/2023/1/eng@2023-06-29 (last accessed 30 December 2024).
Policy Documents
Department of Communications and Digital Technologies (DCDT), SA National AI Policy Framework.
Book
Russell, S. & Norvig, P. (2021). Artificial Intelligence: A Modern Approach. Fourth Edition. Pearson.
Internet Source
World Bank. (2023). Artificial Intelligence in the Public Sector. https://documents1.worldbank.org/curated/en/746721616045333426/pdf/Artificial-Intelligence-in-the-Public-Sector-Summary-Note.pdf
[1] Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others (KZP) (unreported case no 7940/2024P, 8-1-2025).
[2] Mavundla v MEC 2025: para 50.
[3] Responsible AI Use in South African Legal Practice: A Call for Ethical Guidelines.