
Legal Liability in Accidents Involving Autonomous Vehicles: Who is Responsible

Authored By: NUR AMIRA FARHANA BINTI ISMAIL

ISLAMIC SCIENCE UNIVERSITY OF MALAYSIA

Abstract 

As artificial intelligence (AI) technology increasingly powers autonomous vehicles (AVs), the law faces novel challenges in attributing liability for accidents. Traditional criminal negligence frameworks, built around human foresight and control, struggle to accommodate AI systems capable of independent decision-making. This article analyzes the concept of “negligence failures”, situations in which the classical legal principles of mens rea, foreseeability, and risk-taking break down. Drawing on comparative models from Singapore, France, and the UK, it highlights how different jurisdictions have begun reshaping their legal systems to respond to the complexities of AI-driven harm.

Introduction

Autonomous vehicles promise transformative changes in transportation, including fewer traffic accidents and increased mobility. However, they also raise significant legal dilemmas. Who should be held responsible when an AV causes harm? Traditional criminal law assigns blame on the basis of control, intent, or negligence, but AI systems challenge these assumptions by acting in unpredictable and sometimes opaque ways. Scholars have described this as a “responsibility gap”: a situation in which no human agent can be clearly blamed for the AI’s actions. This article examines how criminal law can adapt to this evolving landscape.

Current Legal Framework

In most jurisdictions, criminal liability for harm caused by vehicles rests on human negligence, the failure to act with reasonable care. Negligence requires certain conditions: a duty of care, a breach of that duty, and harm resulting from that breach. Importantly, the actor must have had a reasonable ability to foresee and prevent the harm.

This model breaks down when applied to AVs. These vehicles rely on complex AI systems built on machine learning (ML), which may continue to evolve after deployment and behave in ways not foreseeable by developers or users. Legal frameworks in Singapore, France, and the UK are now attempting to fill this gap through new laws and regulatory roles such as the “user-in-charge” and the Automated Driving System Entity (ADSE).

Legal Issues in AI Driving Accidents

The Epistemic Problem 

Modern AI systems often function as “black boxes.” Their decision-making processes may not be understandable even to their creators. As a result, it may be impossible to prove that a human could have foreseen a specific harmful outcome. This undermines the foreseeability requirement in negligence.

The Control Problem 

Criminal liability typically presumes that the accused had control over the harmful act. In AVs, however, once control is transferred to the AI, humans may no longer be able to intervene effectively. Studies show that when humans act as passive supervisors, their ability to react quickly and appropriately diminishes. This calls into question whether it is just to assign liability to drivers in such cases.

The Problem of Many Hands 

AI systems are developed by teams of programmers, engineers, and companies. If an AV causes harm due to a latent flaw in its training data or software, it becomes difficult to pinpoint a single culpable party. This “problem of many hands” makes it hard to assign criminal responsibility under traditional mens rea standards.

Case Study: Comparative Legal Approaches

Singapore 

Singapore has proposed a two-pronged approach through its Penal Code Review Committee (PCRC) and the Law Reform Committee (LRC). Offence A targets users or developers who act rashly or negligently in deploying AI. Offence B extends liability to those who fail to take reasonable steps to prevent foreseeable harm, even without specific awareness of the risk. This represents an effort to capture a broader range of risky behavior without relying solely on intent or knowledge.

The concept of the “user-in-charge”, introduced in the LRC report, recognizes that liability should rest with the person who has the most oversight and the greatest ability to intervene. However, the LRC cautions against over-reliance on vague duties and suggests sector-specific regulation, such as mandatory intervention thresholds.

France 

France amended its Road Code in 2021 to address criminal liability in autonomous driving. The law creates a framework in which liability shifts from the driver to the vehicle system under certain conditions, such as when the AV is operating in a certified autonomous mode. The human is not liable for “dynamic driving offences” committed during that time, effectively drawing a legal line between manual and autonomous control.

United Kingdom 

The UK Law Commissions’ Joint Report on Automated Vehicles introduces the roles of the “user-in-charge” and the “No-User-in-Charge” (NUIC) operator. It proposes that users-in-charge should not be liable for traffic violations when the AV is in control, shifting responsibility to the ADSE, the entity responsible for the AV’s design and safety. This approach blends practicality with fairness, recognizing that users may not be able to correct AV decisions in real time.

Conclusion

The rise of autonomous vehicles demands a rethinking of traditional negligence doctrines. As this article illustrates, legal systems are experimenting with novel constructs to bridge the “responsibility gap.” The epistemic and control problems highlight the limitations of assigning liability to individuals who lack foresight or the ability to act. Meanwhile, the problem of many hands illustrates the complexity of modern AI development and the limits of traditional accountability mechanisms.

Comparative analysis reveals a common theme: the shift from person-based to system-based liability. By introducing roles such as the “user-in-charge” and the ADSE, lawmakers aim to ensure fairness while maintaining public trust in AV technologies. Moving forward, a hybrid legal model combining strict oversight, sector-specific duties, and technological transparency may be the most viable path for integrating AVs into the legal landscape.

References:

  1. Alice Giannini and Jonathan Kwik, “Negligence Failures and Negligence Fixes: A Comparative Analysis of Criminal Regulation of AI and Autonomous Vehicles” (2023) 34 Criminal Law Forum 43–85, https://doi.org/10.1007/s10609-023-09451-1.
  2. Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata” (2004) 6 Ethics and Information Technology.
  3. Law Commission of England and Wales and Scottish Law Commission, Automated Vehicles: Joint Report (Law Com No 404, Scot Law Com No 258, 2022).
  4. French Ordinance No 2021-443, Code de la route (France, 2021).
  5. Singapore Academy of Law, Law Reform Committee, Report on Criminal Liability, Robotics and AI Systems (2021).
  6. Singapore Penal Code Review Committee, Report (2018).
  7. Ugo Pagallo, “When Morals Ain’t Enough: Robots, Ethics, and the Rules of the Law” (2017) 27 Minds and Machines.
