
THE GHOST IN THE CODE: BRIDGING THE LIABILITY GAP FOR AUTONOMOUS AI SYSTEMS UNDER THE INDIAN TORT REGIME AND INTERNATIONAL BEST PRACTICES

Authored By: Taha Yasin Mohammad Ragaey Mohammad Helmy Afify

STEM High School for Boys

ABSTRACT

The rapid integration of autonomous Artificial Intelligence (AI) into the socio-economic fabric has outpaced the traditional contours of liability law. As algorithms transition from “deterministic tools” to “autonomous agents,” the foundational legal doctrine of respondeat superior and the “reasonable man” test face an existential crisis. This research explores the “Liability Gap”—a phenomenon where harm is caused by AI systems without a clear, attributable path of human negligence. Through a comparative doctrinal lens, this paper scrutinizes India’s nascent Digital Personal Data Protection Act (2023) alongside the European Union’s AI Act (2024). The author argues that existing Indian tort law, rooted in colonial-era Common Law, is ill-equipped to handle the “black box” nature of algorithmic harm. The study finds that a shift from fault-based liability to a “Risk-Based Strict Liability” model is not merely a legal necessity but an ethical imperative. By synthesizing international instruments and domestic precedents, the article proposes a tripartite liability framework involving developers, deployers, and a state-managed “Algorithmic Insurance Fund” to ensure that justice remains accessible in an automated age.

INTRODUCTION

A. Opening Hook

In the history of jurisprudence, the law has always sought a “human heart” to punish or a “human pocket” to tax. However, as we enter the era of Generative AI and autonomous systems, we are witnessing the emergence of a “Ghost in the Code”—an entity that makes decisions, enters contracts, and, unfortunately, causes harm, all without a direct human puppeteer. The central legal question of our decade is no longer just “What is AI?” but rather “Who pays when AI fails?”

B. Background and Context

The evolution of Artificial Intelligence from a predictive tool to an autonomous decision-maker represents the most significant challenge to tort law since the Industrial Revolution. Traditionally, the law of negligence has relied on human agency; a person is liable because they failed to exercise the care expected of a “reasonable man.” But how does one apply the “reasonable man” standard to a neural network that processes ten million variables in a millisecond? In India, where the law of torts remains largely uncodified and dependent on English Common Law precedents, this technological leap creates a profound “Liability Gap.” The gap arises where an AI causes harm but no specific individual’s “fault” can be proven, leaving the victim in a legal vacuum between the developer and the user.

C. Research Objective

This article examines whether existing Indian tortious doctrines—specifically Strict and Absolute Liability—are sufficient to address the complexities of AI-induced harm. Specifically, it compares India’s nascent regulatory landscape with the European Union’s risk-based framework to determine if India requires a dedicated “AI Liability Act.” The central argument is that the current fault-based system is inadequate for “black box” technologies and that a transition toward a codified Strict Liability 2.0 model is required.

D. Significance

This research is timely as India positions itself as a global tech hub. Understanding these doctrinal variations is crucial for advocates, policymakers, and scholars working on digital justice. As autonomous systems enter healthcare, transport, and finance, the lack of a clear liability regime threatens both innovation and consumer protection.

E. Structure Overview

This article proceeds as follows: Part IV establishes the existing legal framework and literature; Part V provides a critical analysis of the liability gap and the “black box” problem; Part VI offers a comparative perspective with the EU and the United States; Part VII presents the findings; and the Conclusion offers specific recommendations for reform.

LITERATURE REVIEW / LEGAL FRAMEWORK

A. Constitutional and Foundational Provisions

The bedrock of algorithmic accountability in India is found in Article 21 of the Constitution, which guarantees the Right to Life and Personal Liberty. The Supreme Court in Justice K.S. Puttaswamy (Retd.) v. Union of India (2017) expanded this to include the right to privacy and protection against arbitrary data processing. It is the author’s submission that harm caused by an unexplainable AI system is a direct violation of the right to a “remedy,” an essential facet of Article 21.

B. Statutory Law: The DPDP Act (2023)

India’s primary digital statute is the Digital Personal Data Protection (DPDP) Act, 2023. While the Act is a milestone for data privacy, its scope is significantly limited with respect to “physical or economic harm” caused by AI. Section 8 of the Act requires a Data Fiduciary to implement “reasonable security safeguards,” but it fails to define the liability of the AI developer when the algorithm itself produces a biased or harmful outcome that is not the result of a data breach. There is a glaring legislative lacuna between data protection and tortious liability.

C. Case Law: The Legacy of Absolute Liability

Indian jurisprudence is unique due to the doctrine of Absolute Liability established in M.C. Mehta v. Union of India (1987). Unlike the British rule in Rylands v. Fletcher, the Indian Supreme Court held that an enterprise engaged in an inherently dangerous activity has an absolute, non-delegable duty to ensure no harm is caused.

Scholarly Commentary: Professor Matthew Scherer argues that AI can be categorized as an “inherently dangerous” activity because of its unpredictability. If this logic is applied, the M.C. Mehta precedent could become the most powerful tool for AI regulation in India, as it allows for no “Act of God” or “Third Party” exceptions.

D. International Instruments

The UN’s Guiding Principles on Business and Human Rights call on states to provide access to an effective remedy. Furthermore, the UNESCO Recommendation on the Ethics of AI (2021) explicitly states that “Liability should always be attributable to a natural or legal person.” These instruments form the international standard that India, as a G20 member, is under growing pressure to adopt.

ANALYSIS / DISCUSSION

I. The Failure of the “Reasonable Man” Standard

A critical challenge in AI litigation is demonstrating the breach of duty. Under traditional negligence law, the plaintiff must prove that the defendant’s conduct fell below the standard of a “reasonable man.” Critical Analysis: It is the humble submission of the author that the “Reasonable Man” test is a categorical failure in the context of machine learning. If a self-driving car in Delhi decides to hit a pedestrian to save five passengers, its decision is based on a “utility optimization” algorithm, not human “reason.” Using a human benchmark to judge a mathematical process is like using a ruler to measure the temperature; it is the wrong tool for the task. The court’s insistence on finding “human negligence” in a machine’s decision-making process effectively immunizes AI developers from accountability.
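To make this concrete, consider the following deliberately simplified sketch in Python. It is purely illustrative (no real vehicle runs code this naive, and every name, probability, and count in it is invented for this sketch), but it captures the kind of “utility optimization” described above: the planner ranks candidate manoeuvres by a numeric cost, and nothing in it corresponds to “reasonable care.”

```python
# Hypothetical illustration only: a toy "utility optimization" planner.
# All names, probabilities, and counts below are invented for this sketch.

def expected_harm(manoeuvre: dict) -> float:
    """Expected harm = probability of collision x number of people endangered."""
    return manoeuvre["p_collision"] * manoeuvre["people_at_risk"]

def choose_manoeuvre(candidates: list[dict]) -> dict:
    # The planner simply minimizes a number. Whether the outcome looks
    # "negligent" to a court is a property of the cost function fixed at
    # design time, not of any care exercised at the moment of harm.
    return min(candidates, key=expected_harm)

candidates = [
    {"name": "swerve_left", "p_collision": 0.9, "people_at_risk": 1},    # one pedestrian
    {"name": "brake_straight", "p_collision": 0.4, "people_at_risk": 5}, # five passengers
]
print(choose_manoeuvre(candidates)["name"])
# Prints "swerve_left": the arithmetic (0.9 x 1 < 0.4 x 5) sacrifices the
# pedestrian to protect the passengers, exactly the trade-off in the text.
```

Asking whether such a function behaved as a “reasonable man” is a category error; the only questions a court can meaningfully ask concern its design.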

II. The “Black Box” Problem and the Evidentiary Barrier

Most modern AI systems, particularly Deep Learning models, operate as “Black Boxes.” Even their creators cannot fully explain why an AI reached a specific conclusion. Legal Position: Section 101 of the Indian Evidence Act, 1872, places the burden of proof on the person who asserts a fact (the victim).

Analysis: This creates an insurmountable barrier. How can a victim of an AI-driven medical misdiagnosis prove “negligence” when the hospital, the developer, and the doctor all claim the AI’s logic is a trade secret or is too complex to interpret? This “Information Asymmetry” shifts the risk of technological failure entirely onto the weakest party—the consumer.
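The evidentiary problem can likewise be shown with a minimal sketch, assuming only NumPy. The random weights below stand in for a trained model’s parameters; a real diagnostic system has millions of them, but the principle is identical: the only available “explanation” of the output is a chain of matrix arithmetic.

```python
# Minimal sketch of the "black box" point. The weights are random stand-ins
# for trained parameters; a production model would have millions of them.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # stand-in for learned first-layer weights
W2 = rng.normal(size=(1, 8))   # stand-in for learned output weights

def diagnose(symptoms: np.ndarray) -> float:
    """Returns a risk score. The only 'reason' for the score is the
    arithmetic below: numbers, not a narrative a court can interrogate."""
    hidden = np.maximum(0, W1 @ symptoms)  # ReLU hidden layer
    return float((W2 @ hidden)[0])

score = diagnose(np.array([0.2, 1.4, 0.0, 0.7]))  # hypothetical patient features
print(score)
# A victim asked to prove *why* this score was negligent must reverse-engineer
# W1 and W2, which the defendant may simultaneously claim are trade secrets.
```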

III. Strict Liability vs. Absolute Liability: The Indian Path

While common law jurisdictions like the U.S. lean toward “Strict Product Liability,” the author argues that India should leverage its own Absolute Liability regime.

Argument: Strict liability allows a defendant to escape by proving it took “all reasonable care”; absolute liability does not. Given that AI harm is often “unforeseeable” even with reasonable care, only an Absolute Liability framework ensures that victims are compensated. The enterprise that reaps the profit from the AI must be the one to bear the risk of its “unforeseeable” errors.

IV. The Multi-Party Liability Tension (Developer vs. Deployer)

In an AI ecosystem, there are usually three parties: the Creator (who writes the code), the Deployer (the company that uses the AI), and the User.

Critique: If a bank uses an AI tool from a third-party developer that ends up discriminating against certain applicants, the bank blames the developer, and the developer blames the “data bias” in the training data supplied by the bank. This “circular finger-pointing” is a hallmark of modern tech litigation. To resolve it, the author proposes a Joint and Several Liability model, similar to environmental law, under which the victim can sue any entity in the chain, and the companies can apportion the blame among themselves privately.

V. Addressing Counter-Arguments: The “Innovation Chilling” Effect

Critics argue that imposing absolute liability will stifle the AI startup ecosystem in India, suggesting that innovators will flee to “lax” jurisdictions.

Response: This critique overlooks the fact that legal certainty actually encourages investment. Investors are more likely to fund a company that has a clear insurance and liability framework than one operating in a legal “grey zone” where a single lawsuit could lead to uncapped, unquantifiable damages. A balanced approach—combining strict liability with mandatory insurance—protects both the victim and the innovator.

COMPARATIVE PERSPECTIVE

A. The European Union: The EU AI Act (2024)

The EU has pioneered the “Risk-Based Approach.” It categorizes AI into “Unacceptable,” “High,” and “Low” risk. High-risk AI (like those used in critical infrastructure) must undergo rigorous “ex-ante” (before-the-fact) testing.
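The tiered structure can be sketched schematically as follows. This is an orientation aid only, assuming the three tiers named above; the Act’s actual annexes define the categories and obligations in far greater detail, and the use-case mapping here is hypothetical.

```python
# Schematic, heavily simplified rendering of the risk-tier idea described
# above. The tier assignments are hypothetical, not quotations from the Act.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "permitted only after ex-ante conformity testing"
    LOW = "light transparency duties"

TIERS = {  # illustrative mapping of use cases to tiers
    "social_scoring": Risk.UNACCEPTABLE,
    "critical_infrastructure_control": Risk.HIGH,
    "medical_diagnosis": Risk.HIGH,
    "spam_filter": Risk.LOW,
}

def pre_market_obligation(use_case: str) -> str:
    tier = TIERS.get(use_case, Risk.LOW)
    if tier is Risk.UNACCEPTABLE:
        return "deployment prohibited"
    if tier is Risk.HIGH:
        return "ex-ante testing and documentation required before deployment"
    return "may deploy with transparency disclosures"

print(pre_market_obligation("critical_infrastructure_control"))
```

The design point is that the heavy compliance burden attaches before deployment, which is precisely the preventive emphasis noted below.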

Lesson for India: The EU approach focuses on prevention. However, the EU’s proposed “AI Liability Directive” also introduces a “Presumption of Causality”: if an AI system is found to be non-compliant with safety rules, the court presumes that the AI caused the harm, shifting the burden of proof onto the company.

B. The United States: Judicial Hesitation

In contrast, the U.S. relies on existing product liability laws. However, U.S. courts have struggled with Section 230 of the Communications Decency Act, which often immunizes platforms from liability for the content their algorithms promote.

Contrast: India cannot afford the U.S. model. In a country with a massive digital divide, leaving AI liability to the “free market” or “contractual fine print” will lead to systemic exploitation of the digital-illiterate population.

C. Summary Table: Comparative Standing

Feature | India (Current) | European Union (AI Act) | Proposed Indian Reform
Liability Basis | Fault-based (Negligence) | Risk-based (Statutory) | Strict Liability 2.0
Burden of Proof | On the Victim | Presumed (in some cases) | Shifting to Developer
Statute | Uncodified Tort / DPDP Act | EU AI Act 2024 | AI Liability Act (Codified)

FINDINGS / OBSERVATIONS

Based on the preceding analysis, the following findings emerge:

1. The Legislative Gap: India lacks any specific civil liability framework for AI. The DPDP Act 2023 is insufficient as it treats AI purely as a data issue, ignoring its physical and economic harm potential.

2. Obsolescence of the Reasonable Man: The traditional negligence standard cannot address autonomous systems whose behaviour is scientifically impossible for humans to foresee.

3. The Black Box Barrier: The current rules of evidence (placing the burden on the victim) create a “Justice Gap” because of the inherent complexity and secrecy of algorithmic code.

4. Absolute Liability Suitability: India’s unique domestic doctrine of “Absolute Liability” is the most culturally and legally appropriate tool to manage high-risk AI, provided it is codified.

5. Comparative Advantage: By adopting a “Presumption of Causality” similar to the EU, India can become a leader in ethical AI, attracting “responsible” global investment.

CONCLUSION & RECOMMENDATIONS

A. Restatement of Objective

This article examined the efficacy of the Indian tort regime in addressing the “Liability Gap” created by autonomous AI systems. Through a comparative analysis, it has demonstrated that a fault-based system is no longer viable for a technology that functions without human agency.

B. Summary of Arguments

The research has shown that the “black box” nature of AI prevents victims from meeting the evidentiary requirements of traditional negligence. Furthermore, the reliance on the “reasonable man” standard is a legal fiction when applied to autonomous code. India’s existing M.C. Mehta precedent provides a powerful foundation, but it requires legislative codification to be applied effectively to the digital sphere.

C. Recommendations

For the Legislature:

1. Enact the “Artificial Intelligence Liability Act” (AILA): This should codify a “Risk-Based” liability regime, exempting low-risk AI while imposing strict liability on high-risk applications (health, transport, finance).

2. Reverse the Burden of Proof: For high-risk AI, the law should presume the AI was at fault if harm occurs, requiring the developer to prove that the system met all “Explainability” and “Safety” standards.

For the Judiciary:

3. Adopt “Digital Absolute Liability”: Courts should treat high-risk AI deployment as an “inherently dangerous activity” under the M.C. Mehta doctrine.

4. Appoint Technical Assessors: Under the Code of Civil Procedure, courts should utilize independent AI experts to “de-mystify” the black box during trials.

For Legal Practitioners:

5. Focus on “Design Defects”: Advocates should frame AI harm not as “negligence in use” but as a “defect in design or training data,” which is easier to prove under product liability principles.

D. Future Research

Future studies should examine the viability of “Legal Personhood” for AI—whether an AI can be sued directly if it possesses its own assets. Additionally, empirical studies on the impact of liability on startup innovation in the Global South are urgently needed.

E. Closing Statement

As we surrender more of our lives to the “Ghost in the Code,” we must ensure that the law remains the ultimate master. The complexity of an algorithm must never be an excuse for the simplicity of a denial of justice.

REFERENCES / BIBLIOGRAPHY

A. Cases

• Donoghue v. Stevenson, [1932] A.C. 562 (H.L.) [United Kingdom].

• Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 S.C.C. 1 [India].

• M.C. Mehta v. Union of India, A.I.R. 1987 S.C. 1086 [India].

• Rylands v. Fletcher, (1868) L.R. 3 H.L. 330 [United Kingdom].

• State of Gujarat v. Memon Mahomed, A.I.R. 1967 S.C. 1885 [India].

B. Statutes and Legislation

• Digital Personal Data Protection Act, No. 22 of 2023, India Code.

• Artificial Intelligence Act, Regulation (EU) 2024/1689 of the European Parliament and of the Council (2024) [European Union].

• The Indian Evidence Act, No. 1 of 1872, India Code.

• The Constitution of India, 1950.

C. Books

• Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law (Cambridge Univ. Press 2020).

• Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2019).

• Cass R. Sunstein, One Case at a Time: Judicial Minimalism on the Supreme Court (Harvard Univ. Press 1999).

D. Journal Articles

• Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, and Strategies, 29 Harv. J.L. & Tech. 353 (2016).

• Niloufer Selvadurai, The Proper Basis for AI Liability, 32(1) Int’l J.L. & Info. Tech. 45 (2024).

• Jacqueline Peel & Hari M. Osofsky, Climate Change Litigation, 7 Ann. Rev. L. & Soc. Sci. 225 (2011) [Reference for Comparative Methodology].

E. Online Sources

• UNESCO, Recommendation on the Ethics of Artificial Intelligence, https://unesdoc.unesco.org/ark:/48223/pf0000381115 (last visited Jan. 22, 2026).

• NITI Aayog, National Strategy for Artificial Intelligence, https://www.niti.gov.in (last visited Jan. 21, 2026).
