Authored By: Priya Kumari
Dr. Bhimrao Ambedkar Agra University
ABSTRACT
The rapid integration of Artificial Intelligence (AI) into governance, healthcare, transportation, finance, and law enforcement has generated unprecedented legal challenges, particularly in the realm of criminal liability. Indian criminal law, rooted in the Indian Penal Code, 1860, is premised on human agency, intention, and culpability—concepts that do not easily accommodate autonomous or semi-autonomous machines. This article critically examines whether existing criminal law doctrines in India are adequate to address harm caused by AI systems. It analyses the conceptual incompatibility between AI decision-making and traditional mens rea requirements, explores the potential liability of developers, operators, and corporations, and evaluates whether AI itself can or should be treated as a legal subject. The paper also undertakes a brief comparative analysis with the European Union and the United States to identify emerging global trends. The study finds that India currently lacks a coherent legal framework to address AI-related crimes and risks regulatory paralysis if reform is delayed. The article concludes by proposing doctrinal adaptations, statutory intervention, and a risk-based regulatory approach to ensure accountability while fostering innovation.
1. INTRODUCTION
Artificial Intelligence (AI) is no longer a futuristic concept; it is an operational reality influencing decision-making across sectors. From predictive policing and facial recognition to autonomous vehicles and algorithmic credit scoring, AI systems increasingly act with minimal human intervention. While these technologies promise efficiency and innovation, they also create new forms of risk and harm. When an AI system causes injury, death, discrimination, or financial loss, a critical legal question arises: who should be held criminally liable?
Indian criminal law is fundamentally anthropocentric. Liability under the Indian Penal Code, 1860 (IPC) is premised on voluntary conduct accompanied by a guilty mind (mens rea). However, AI systems operate through algorithms, machine learning, and data-driven decision-making, often producing outcomes that were neither intended nor foreseeable by their creators or users. This challenges the foundational principles of criminal jurisprudence.
The objective of this article is to examine whether existing Indian criminal law can meaningfully respond to AI-generated harm, identify doctrinal and regulatory gaps, and suggest reforms to ensure accountability without stifling technological progress.
2. LEGAL FRAMEWORK AND LITERATURE REVIEW
2.1 Criminal Liability under Indian Law
Indian criminal law requires two essential elements:
- Actus reus (a guilty act), and
- Mens rea (a guilty mind).
The Supreme Court of India has consistently emphasized the centrality of mens rea in criminal liability, except in strict liability offences. In Nathulal v. State of M.P., the Court held that criminal intent is a core requirement unless expressly excluded by statute.
AI systems, however, lack consciousness, intention, or moral agency. They operate based on probabilistic models and training data. Consequently, attributing mens rea to AI becomes legally incoherent under current doctrine.
2.2 Corporate Criminal Liability
Indian jurisprudence recognizes corporate criminal liability. In Standard Chartered Bank v. Directorate of Enforcement, the Supreme Court held that a company can be prosecuted even for offences punishable with mandatory imprisonment, with a fine imposed in its place; subsequent decisions have accepted that mens rea may be attributed to a company through the state of mind of its controlling officers. This jurisprudence opens a potential pathway to attribute liability for AI-caused harm to corporate entities deploying or controlling such systems.
However, AI systems complicate attribution further because harm may arise from opaque algorithms, third-party data, or self-learning processes beyond direct human control.
2.3 Scholarly Perspectives
Legal scholars globally are divided on AI liability. Some argue for treating AI as a mere tool, placing liability on humans behind it. Others propose granting AI limited legal personality. Indian scholarship remains nascent, largely focusing on ethical concerns rather than enforceable criminal accountability.
3. ANALYSIS: AI AND THE PROBLEM OF CRIMINAL LIABILITY
3.1 Can AI Commit a Crime?
Under Indian law, only a “person” can commit an offence. Section 11 of the IPC defines “person” to include companies and associations, but it does not contemplate non-sentient machines. Granting AI legal personhood would raise profound philosophical and constitutional issues, including questions of punishment, deterrence, and moral blameworthiness.
Moreover, criminal punishment—imprisonment or fine—cannot meaningfully apply to AI. Therefore, recognizing AI as a criminally liable entity is neither practical nor desirable under current legal theory.
3.2 Human Liability Models
a) Developer Liability
Developers may be liable if harm results from negligent design, biased training data, or failure to incorporate safeguards. However, imposing criminal liability requires proof of intention or knowledge, which may be difficult when harm arises from unforeseen algorithmic behavior.
b) Operator or User Liability
Operators deploying AI systems in real-world contexts may be held liable under negligence or recklessness standards. For instance, deploying faulty facial recognition software in policing could lead to wrongful arrests, engaging Articles 14 and 21 of the Constitution.
c) Corporate Liability
Corporations are best positioned to bear responsibility due to their control over AI deployment, profit incentives, and capacity to implement compliance mechanisms. A corporate liability model aligns with risk distribution and deterrence principles.
3.3 Strict Liability as a Possible Solution
Indian law already recognizes liability without fault in areas such as environmental law: in M.C. Mehta v. Union of India, the Supreme Court articulated the rule of absolute liability for enterprises engaged in inherently hazardous activities. A similar framework could apply to high-risk AI applications, eliminating the need to prove mens rea while ensuring victim compensation and accountability.
4. CONSTITUTIONAL DIMENSIONS
AI-driven decision-making directly impacts fundamental rights. In Justice K.S. Puttaswamy v. Union of India, the Supreme Court recognized privacy as a fundamental right. Algorithmic surveillance, data profiling, and automated decision-making threaten privacy, dignity, and equality.
If AI systems are used by the State, constitutional accountability cannot be avoided by attributing harm to “technology.” Bodies falling within the definition of “State” under Article 12 remain answerable for violations of fundamental rights, reinforcing the need for clear liability standards.
5. COMPARATIVE PERSPECTIVE
5.1 European Union
The EU Artificial Intelligence Act adopts a risk-based regulatory approach, classifying AI systems into unacceptable-risk, high-risk, limited-risk, and minimal-risk tiers. High-risk AI systems are subject to strict compliance, transparency, and accountability obligations.
5.2 United States
The U.S. follows a sectoral, tort-based approach, relying on negligence and product liability rather than criminal sanctions. Criminal liability remains exceptional.
5.3 Lessons for India
India can adopt a hybrid approach—combining EU-style preventive regulation with Indian constitutional safeguards and selective strict liability for high-risk AI.
6. FINDINGS AND OBSERVATIONS
1. Indian criminal law is ill-equipped to address AI-generated harm due to its reliance on human intent.
2. Granting AI legal personhood is impractical and normatively unsound.
3. Corporate and operator-based liability models offer the most workable solutions.
4. Constitutional accountability becomes crucial when AI is deployed by the State.
5. India urgently needs statutory clarity to avoid regulatory uncertainty.
7. CONCLUSION AND RECOMMENDATIONS
Artificial Intelligence challenges the foundational assumptions of criminal law but does not render accountability impossible. Rather than forcing AI into outdated doctrinal frameworks, Indian law must evolve pragmatically.
Recommendations:
- Enact a dedicated AI regulatory statute with criminal and civil liability provisions.
- Introduce strict liability for high-risk AI applications.
- Mandate algorithmic transparency and auditability.
- Strengthen corporate compliance obligations.
- Ensure constitutional oversight of State-deployed AI systems.
A balanced legal framework can protect individual rights while enabling India to harness AI responsibly and ethically.
REFERENCES (Bluebook Style)
- Indian Penal Code, 1860.
- Standard Chartered Bank v. Directorate of Enforcement, (2005) 4 SCC 530.
- Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
- M.C. Mehta v. Union of India, (1987) 1 SCC 395.
- European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final (Apr. 21, 2021).
- Harry Surden, Machine Learning and Law, 89 Wash. L. Rev. 87 (2014).





