Authored By: Nompilo Ngcobo
- INTRODUCTION
Artificial Intelligence, commonly known as AI, refers to computer systems that mimic human intelligence by performing tasks rapidly in order to support informed decision-making. A more sophisticated definition of AI was formulated by Gennatas and Chen, who define AI as “[t]echnology that allows humans to build intelligent machines.”[1] This article argues that while Artificial Intelligence offers significant benefits for the South African legal system, its integration must be carefully regulated to protect constitutional rights, maintain ethical practice and ensure human oversight in legal decision-making. The South African government has not yet formally regulated AI in the legal system by means of sanctions, legislation or bills; however, emerging practice suggests that AI can be used in conjunction with human expertise to assist in legal decision-making. The article concludes with a recommendation that AI should be used correctly, and in conjunction with human expertise, to achieve accurate results.
- INTRODUCING ARTIFICIAL INTELLIGENCE AND ITS LEGISLATION IN SOUTH AFRICA
As of 2026, it is evident that AI has played a major role in the advancement of technology, which has also impacted the legal fraternity and how it conducts legal research. Information is now readily available at our fingertips, which directly implicates the right to privacy in terms of section 14 of the Constitution.[2] That section provides that everyone has the right to privacy, which includes the right not to have the privacy of their communications infringed. Although no AI-specific law currently exists in South Africa, several existing statutes govern AI, as follows:
The Protection of Personal Information Act 4 of 2013 (POPIA) – sets out the data privacy and protection principles applicable to AI technologies that process personal information. The Act prevents AI companies from simply using personal information without a lawful basis for processing, as set out in the Act.
Consumer Protection Act 68 of 2008 (CPA) – sections 22 and 23 of the Act ensure that consumers receive clear and understandable information. In respect of the processing of personal information, the CPA is applied in conjunction with POPIA. Where AI is involved in a service or product, suppliers must ensure that automated processes are explained in plain language.
Cybercrimes Act 10 of 2020 – the Act criminalises online activities such as deleting, altering or damaging data. It ensures that AI cannot be used as a shield for unlawful activities such as hacking and/or fraud.
Further, the Department of Communications and Digital Technologies (DCDT) released a draft National AI Policy Framework in 2024, intended for implementation in 2025/6, to act as a guideline for future AI legislation.[3] The draft National AI Policy is expected to be published in March 2026, with full implementation and enforcement of the regulations anticipated between 2027 and 2028.[4] This raises a crucial question: is the current constitutional and legislative framework, particularly POPIA, sufficient to protect individuals from the potential harms of AI-generated content in legal proceedings?
- CONSTITUTIONAL ENGAGEMENT IN SOUTH AFRICAN AI LAWS
AI is a reality that needs to be regulated now, not once it reaches maturity. When the general public think of AI, they envision the future; however, it is already in use. The Constitution remains the supreme guide for current and future regulation, requiring that all technologies uphold human dignity, equality, privacy and fairness. AI systems must therefore avoid violating human rights and must not produce biased, ambiguous or discriminatory outcomes. These systems must comply with POPIA’s provisions to protect individuals from unfair practices. The application of AI must also comply with the Promotion of Administrative Justice Act (PAJA) to ensure decisions are lawful, reasonable and procedurally fair.[5] Under PAJA, should a government department use AI to make decisions, the use of AI must be authorised by legislation. Further, PAJA’s requirement of reasonableness mandates that any decision informed by AI must be rationally connected to the available evidence and the purpose of the empowering provision. Lastly, where procedural fairness is required and AI is involved in a decision, individuals must be notified of decisions that affect them and given an opportunity to respond.
In Mavundla v MEC: Department of Cooperative Government and Traditional Affairs KwaZulu-Natal and Others,[6] the facts demonstrate the consequences of legal practitioners placing AI-generated citations before the court. Such conduct by legal professionals makes a mockery of the legal system and underscores that AI, if not used correctly, may lead to severe consequences. The court criticised the submissions as flawed and unprofessional, ultimately dismissing Mavundla’s application for leave to appeal. The decision puts legal professionals on notice regarding the unethical use of AI tools in legal research, and shows why thorough verification remains essential, regardless of the source of information. Simply put, AI tools can ‘hallucinate’ case law, meaning they can generate citations to cases that are in fact fictitious.
This raises the question of how the Constitution is to control AI-generated research, and whether such research is sufficiently reliable to present before a court of law. One should ask whether POPIA, as the current legislation, is enough to protect an individual from potential harm to the rule of law where AI is involved. Rule 57.1 of the Code of Conduct for all Legal Practitioners, Candidate Legal Practitioners and Juristic Entities (the Code of Conduct) states: ‘A legal practitioner shall take all reasonable steps to avoid, directly or indirectly, misleading a court or a tribunal on any matter of fact or question of law. In particular, a legal practitioner shall not mislead a court or a tribunal in respect of what is in papers before the court or tribunal, including any transcript of evidence.’[7]
The Mavundla judgment should serve as a warning that individuals’ rights may be infringed by AI-based research, and that extra attention is required wherever AI tools are used in legal and/or administrative decision-making. In essence, the coexistence of AI and human expertise illustrates the importance of closely examining information before allowing it to influence official, binding decisions.
Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others[8] likewise highlights the risks of citing fictitious authorities created by AI tools. In summary, the court ordered the Regulator to release the licence to Northbound, but, in order to maintain the standards of the legal profession, the judge referred counsel to the Legal Practice Council (LPC) for citing non-existent case law sourced from AI-generated searches. This decision confirms the risk of endangering the livelihood of individuals through the improper use of AI tools. The courts, together with the Constitution, continuously stress that legal professionals have an ethical duty not to mislead the courts, whether by intention or negligence; a lack of knowledge about AI risks does not excuse a breach of this duty.[9] This is one of the first cases in South Africa to deal directly with the misuse of AI in legal proceedings.
South Africa has reached the point at which AI-specific rules of law should be implemented immediately. It should not be that our courts are bound by no legislation regulating the use of AI, given the rapid rate at which professionals are utilising this technology for legal research purposes.
The cases therefore demonstrate not just a breach of professional ethics, but a tangible threat to the constitutional right to just administrative action under section 33, as the decisions were initially based on procedurally flawed and factually incorrect ‘evidence’ generated by AI.
- SOUTH AFRICAN LEGAL INSTITUTIONS AND THEIR VIEWS ON AI LAWS
- The LPC (Legal Practice Council)
“The Legal Practice Council is a national, statutory body established in terms of section 4 of the Legal Practice Act 28 of 2014.”[10]
Institutions such as the LPC are emphasizing verification to combat the ‘hallucinations’ as seen in cases like Mavundla v MEC. The LPC is formulating a policy on:
- Accountability: where legal professionals remain fully accountable for any AI-generated content.
- Ethical constraints: where AI should not be used to create or rely upon biased or discriminatory information.
- Data Security: where legal professionals must ensure that AI-tools do not breach client confidentiality.
- Future Regulation: The LPC is working towards integrating AI competence into continuing professional development (CPD) requirements by 2026.
- LAW SOCIETY OF SOUTH AFRICA (LSSA)
The Law Society of South Africa (LSSA), which represents the interests of attorneys, has also raised concerns regarding the misuse of AI in legal practice.
In essence, the LSSA vigorously warns legal professionals of the unethical and improper use of AI-tools that may potentially lead to disciplinary action, suspension or being struck off the roll.
Both institutions converge on the principle that AI is a tool, not a substitute, and that professionals remain ultimately responsible for their use of AI tools and systems.
- HOW CAN AI POSITIVELY IMPACT THE SOUTH AFRICAN LEGAL SYSTEM?
The University of South Africa (UNISA), a distance-learning institution, often faces challenges relating to access to academic resources. Assignments frequently require information found in casebooks, law journals and textbooks that are out of stock, unavailable or out of reach for students living in rural areas. AI plays a significant role in bridging the gap between access to information and cost, especially for financially disadvantaged students, by allowing them to perform research from home and receive results in minutes.
In comparison, the European Union has taken a more proactive regulatory approach through the European Union Artificial Intelligence Act, which classifies AI systems according to risk levels and imposes strict obligations on high-risk systems. This approach demonstrates how early regulatory intervention can balance technological innovation with the protection of fundamental rights. South Africa may benefit from adopting a similar framework when developing its national AI Policy.
AI has the potential to modernise the South African legal system by increasing efficiency, reducing costs and improving access to legal services. When used responsibly, it can support the broader constitutional principles of fairness, transparency and accountability in the administration of justice. Further, by making legal information more affordable and accessible, AI tools can democratise basic legal knowledge, improving access to justice. However, as the EU’s AI Act demonstrates, realising these benefits requires a proactive regulatory approach that classifies AI systems by risk.
POSITIVE IMPACTS AND ADVANTAGES OF AI FOR LEGAL PROFESSIONALS
- Enhanced accuracy: if used correctly, AI can enhance legal analysis and reduce the risk of human error, leading professionals to more precise results and helping them avoid overlooking critical information.
- Improved access to justice: by making information more affordable and accessible, AI tools assist with legal research and the analysis of legal documents, allowing people to gain a basic understanding of their problem.
THE DISADVANTAGES OF AI FOR LEGAL PROFESSIONALS
- Regulatory uncertainty: legal institutions are yet to implement and publish binding sanctions and legislation governing the use of AI, leaving legal practitioners uncertain about disciplinary consequences for AI-related harm.
CONCLUSION
The future of AI in the South African legal system holds great potential. The immediate priority for South Africa should not be a vast, all-encompassing AI statute, but rather the finalisation and implementation of the LPC’s ethical guidelines, coupled with the integration of mandatory AI literacy training into CPD requirements. This targeted approach would directly address the most immediate threat, namely professional negligence and ethical breaches by lawyers, while the longer-term National AI Policy Framework is developed to address broader systemic issues. This two-track strategy would ensure that the integration of AI into South Africa’s legal system is guided by professional accountability from the outset, thereby safeguarding constitutional rights at its core. What once seemed like a profession rooted in human intellect and rules is now evolving through technological norms.
BIBLIOGRAPHY
Legislation
Protection of Personal Information Act 4 of 2013 (POPIA)
Consumer Protection Act 68 of 2008 (CPA)
Cybercrimes Act 10 of 2020
Rule 57.1 of the Code of Conduct for all Legal Practitioners, Candidate Legal Practitioners and Juristic Entities (the Code of Conduct)
Section 14 of the Constitution of South Africa
Section 33 of the Constitution
Promotion of Administrative Justice Act 3 of 2000 (PAJA)
Section 4 of the Legal Practice Act 28 of 2014
European Union Artificial Intelligence Act
Case law
Mavundla v MEC: Department of Cooperative Government and Traditional Affairs KwaZulu-Natal and Others (7940/2024P) [2025] ZAKZPHC 2 (8 January 2025) (Leave to appeal)
Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others (2025-072038) [2025] ZAGPJHC 538 (30 June 2025)
Secondary Sources
Gennatas and Chen “Artificial Intelligence in Medicine: Past, Present and Future” (Academic Press 2020)
Fluxmans “AI Regulations in South Africa: Legal gaps and progress” (2023), accessed 12 March 2026, https://www.fluxmans.com
Fasken “AI Regulations in South Africa: A step in the right direction” (2023), accessed 12 March 2026, https://www.fasken.com
South African Legal Information Institute, accessed 14 March 2026 https://www.saflii.org.za
South African Law Library, accessed 13 March 2026 https://www.lawlibrary.org.za
University of Johannesburg: “A Study on the Adoption of AI” 2022, accessed 13 March 2026 https://ujcontent.uj.ac.za
South African National Artificial Intelligence Policy Framework
[1] Efstathios D Gennatas and Jonathan H Chen “Artificial Intelligence in Medicine: Past, Present and Future” in Lei Xing, Maryellen Giger and Min (eds) Artificial Intelligence in Medicine: Technical Basis and Clinical Application (Academic Press 2020)
[2] Section 14 of the Constitution
[3] Fasken “AI Regulations in South Africa: A step in the right direction” (2023) https://www.fasken.com
[4] Fluxmans “AI Regulations in South Africa: Legal gaps & progress” (2024) https://www.fluxmans.com
[5] Section 33 of the Constitution
[6] Full citation: Mavundla v MEC: Department of Co-operative Government and Traditional Affairs KwaZulu-Natal and Others (7940/2024P) [2025] ZAKZPHC 2 (8 January 2025) (Leave to appeal)
[7] Mavundla v MEC: Department of Co-operative Government and Traditional Affairs KwaZulu-Natal and Others, para [37], available at https://lawlibrary.org.za
[8] Full citation: Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others (2025-072038) [2025] ZAGPJHC 538 (30 June 2025)
[9] Mota Makore (2024), available at https://saflii.org.za