WHO IS LIABLE WHEN AI GOES WRONG: NAVIGATING THE RISK OF LIABILITY IN THE DIGITAL ERA

Authored By: Ramahana Mpho Fhatuwani

University of Venda

Abstract

The South African law of delict governs the allocation of liability for harm suffered by a person. As artificial intelligence takes on an increasingly prominent role in professional settings, including legal, medical, and business workspaces, it has become difficult to determine who is liable when AI makes an error that causes harm to an individual. This difficulty is attributable to the absence of dedicated laws regulating AI and its use within professional settings. This research examines the South African law of delict to establish who carries the risk of liability in the age of AI. It employs a doctrinal legal research methodology, which involves studying and synthesising case law and existing legislation. It also briefly explains how risk allocation works under the South African law of delict, a question that is significant both procedurally and economically.

Keywords: artificial intelligence, South African law of delict, liability, negligence, legal personality, capacity to act, innovation, digitalisation.

1. Introduction

Artificial intelligence (AI) is rapidly taking on a crucial role in South African professional settings, evolving from a speculative tool into a decision-maker, especially in the legal, business, and medical ecosystems.1 This transition has given rise to several legal risks, most notably the risk of delictual damage arising from the use of AI in professional settings.2 This research addresses the question of who is legally accountable for AI errors in the modern era: the AI's creator, the user, or the AI itself.

Consider this scenario: an AI-bot gives a user medical advice, which the user relies on for medical relief, only to react negatively to the treatment, suffering severe pain and further harm. Who is liable? Is it the AI's creator, the user of the AI, or the AI itself? The answer matters because it determines whom to sue for damages when AI causes harm, especially in high-stakes professional settings.

2. The Fundamental Framework

2.1 The General Rule Pertaining to the Law of Delict

Unlike the United States, South Africa lacks a specific legal framework to address AI-related risks. Instead, the South African law of delict and the common law must be used to determine who may be liable for AI errors in professional settings.3 The law of delict comprises the principles and rules that determine who may sue, and who may be sued, for harm caused to one's person or property. The basic premise is that, for a person to be liable, that person must have caused harm to another.4

To establish who is liable for harm caused by AI, the following elements of delict must first be met — this is the general rule:

A. Conduct

Conduct refers to a voluntary human act or omission that causes harm to another. In cases involving AI, this would be the output generated by the AI that led to the harm suffered by the user. In the above example, the conduct was the AI-bot's advice, which led the user to suffer a severe, painful reaction to the medical treatment. But who is liable for this conduct? Can it really be attributed to the AI-bot?

B. Wrongfulness

Conduct is wrongful if it causes harm to another person in a manner that contravenes the law. In the AI-bot example, the issuance of detrimental advice that caused severe harm to the user infringes the user's constitutional rights to freedom and security of the person and to dignity, and is therefore wrongful.5 This cannot, however, be stated with absolute certainty, since the AI-bot is not a person capable of distinguishing right from wrong.

C. Fault

Fault concerns the blameworthiness of the alleged wrongdoer: whether the harmful conduct was intentional or merely negligent. The distinction between intention and negligence is explored below:

i. Intention (Intentional Conduct)

In cases of intentional conduct, the person directed their will at the harmful result, acting of their own accord and out of a personal motive. A good example is S v Dube,6 where the court found that the accused had intended, decided, and planned to assault another person, who subsequently died as a result of the assault. The court accordingly convicted the accused of assault with intent to cause grievous bodily harm. (While S v Dube is a criminal case, it is cited here by analogy to illustrate the concept of intention.)

ii. Negligence (Negligent Conduct)

In cases of negligence, the person acts wrongly, though not intentionally; a reasonable person in the same position, however, would have foreseen the likelihood of harm to others and taken steps to guard against it. In cases involving AI, it is usually the user, not the AI itself, who is held liable for harm caused by the AI. A pertinent example is Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others,7 discussed below.

D. Causation

Causation, or cause and effect, is the foundation for establishing damages and liability.8 The conduct of the person alleged to be liable must have been the direct cause of the harm suffered by the affected person.9 In the AI-bot example, the advice rendered by the AI must have been the direct cause of the pain and suffering incurred by the user. Finally, the affected person must have suffered actual harm or loss, which in the example is the user's pain and suffering. This still raises the question: does the AI-bot carry the risk of liability for harm suffered by its users arising from its output?

3. Navigating Who Carries the Risk of Liability

3.1 The Legal Status of AI and Its Capacity to Act

For AI to incur liability for harm suffered by a user as a result of its output, it must first have legal personality and, secondly, the capacity to act. In South African law, however, AI is not recognised as a juristic person; it therefore lacks legal personality, which in turn precludes any capacity to act. Since the capacity to act is a prerequisite for conduct as an element of delict, AI cannot be held liable for any harm arising from its generative output.

Returning to the AI-bot example: the AI-bot cannot be held liable; it falls outside the reach of delictual liability arising from its use or generative output altogether. This leaves two role players who could potentially bear liability for detrimental AI-generated outputs: the creator (developer or trainer) of the AI, and the user.10 AI is, in essence, merely a tool, not a person capable of acting or causing harm in a legally cognisable sense.

3.2 Why the Standard Delict Framework Does Not Apply to AI

The general rule, namely that the elements of delict must first be satisfied before establishing who bears the risk of liability, does not readily apply to AI. Because AI lacks legal personality, it lacks the capacity to act. It is therefore incapable of satisfying the essential elements of delict: it cannot commit wrongful conduct, and it cannot be found at fault for any harm arising from its use.

3.3 Who, Then, Is Liable When AI Goes Wrong?

In South Africa, recent case law indicates that when AI makes a detrimental error, it is the user who bears liability for the resulting harm.11 In Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others, counsel for the applicant cited AI-generated authorities in support of their legal argument. The court was misled as a result, and the proceedings were unnecessarily prolonged by the investigation that followed.

The court held counsel liable for the procedural and ethical harm brought about by the irresponsible use of AI in legal proceedings, emphasising that counsel owed a duty to the court to make submissions in line with legal and ethical standards. Counsel was accordingly ordered to pay costs out of their own pocket.

The Mavundla case powerfully illustrates that when a professional, whether a legal practitioner, medical doctor, or businessperson, uses AI irresponsibly in a professional setting, it is that professional who will be held liable for any resulting harm.

4. Mitigation Strategies: Recommendations

Several strategies are proposed to address the regulatory gaps identified in this article:

  1. Explicitly recognise AI under South African law, assigning it a defined place in the law of delict, either as a legal object or a legal subject.
  2. Introduce provisions requiring mandatory review of all AI-generated works to establish originality and accuracy before reliance.
  3. Establish clear legislative provisions governing how risk passes from the creator to the user of AI once the AI is deployed.
  4. Raise awareness within professional settings about the various risks associated with AI in this digital era. This includes promoting compliance with national and international delictual standards, integrating these standards into local policies, and establishing sound risk assessment and mitigation practices within professional environments.
  5. Once these risk frameworks are in place, set out clear remedies, both delictual and procedural, for persons who suffer harm in connection with AI use. These remedies could include fines for irresponsible or negligent AI use, internal disciplinary proceedings, and civil lawsuits against the user of the AI.

5. Conclusion

This article has explored the question of who bears the risk of delictual liability when AI causes harm in a professional setting. The analysis reveals that, under South African law, AI itself cannot be held liable: it lacks legal personality and, consequently, the capacity to act. Liability therefore falls on either the developer or the user of the AI, with recent case law — most notably Mavundla v MEC — affirming that professionals who deploy AI irresponsibly will be held personally accountable.

While AI regulatory frameworks remain emergent and uncertain, it is important that risk be properly acknowledged and assessed, so as to ease the process of claiming damages when AI makes a detrimental error. This is not only procedurally significant; it will also set a standard for responsible AI use across professional settings, whilst promoting innovation and digitalisation in South Africa.

Bibliography

Legislation

Constitution of the Republic of South Africa, 1996.

Case Law

Barnard v Santam Bpk 1999 (1) SA 202 (SCA).

Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others 2025 (3) SA 534 (KZP).

S v Dube 2023 (1) SACR 513 (MM).

Journal Articles

Khan F, ‘The Impact of Artificial Intelligence on the Law of Delict and Product Liability’ (2024) 45(3) Obiter 674.

Online Sources

Chandler K & Behrendt P, 'AI liability – who is accountable when artificial intelligence malfunctions?' (TaylorWessing, 7 January 2025).

Wolson D, ‘2026: Embracing AI execution in South Africa’s business landscape’ (IOL, 24 January 2026) accessed 31 January 2026.

1 Dean Wolson, ‘2026: Embracing AI execution in South Africa’s business landscape’ (IOL, 24 January 2026) accessed 31 January 2026.

2 Franaaz Khan, ‘The Impact of Artificial Intelligence on the Law of Delict and Product Liability’ (2024) 45(3) Obiter 674.

3 Khan (n 2) 676.

4 Khan (n 2) 676–677.

5 Constitution of the Republic of South Africa, 1996, ss 10 and 12.

6 S v Dube 2023 (1) SACR 513 (MM) para [1].

7 Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others 2025 (3) SA 534 (KZP).

8 Khan (n 2) 677.

9 Barnard v Santam Bpk 1999 (1) SA 202 (SCA).

10 Katie Chandler & Dr Philipp Behrendt, 'AI liability – who is accountable when artificial intelligence malfunctions?' (TaylorWessing, 7 January 2025).

11 Mavundla (n 7).
