Authored By: MARTHALA JOSHIKA REDDY
SASTRA DEEMED UNIVERSITY
Introduction
Suppose an autonomous taxi in Mumbai strikes a pedestrian because of a navigation-algorithm malfunction traceable to the taxi company's Noida design team and a Delhi-based deploying corporation. Who could be prosecuted under Indian law (Bharatiya Nyaya Sanhita, 2023) for the actions of this machine? In the 2018 Uber self-driving car fatality in Arizona, it was the human safety driver, not the vehicle or its software, who was charged with negligent homicide. As India rolls out artificial intelligence in predictive policing through the Crime and Criminal Tracking Network & Systems (CCTNS), electronic courts, and autonomous vehicle pilots, such examples expose the anthropocentric foundations of our criminal law. BNS §2(n) makes voluntary conduct a precondition of culpability, and the mental elements of intention and knowledge (mens rea) presuppose a mind; none of these constructs maps onto an algorithm performing pattern recognition.
This paper argues that machines cannot possess mens rea; criminal liability for autonomous systems should therefore remain anchored in the moral foundations of retribution, while the law develops more precise, human-attributed methods of assigning liability for failures and tortious conduct. To support this argument, I offer doctrinal analysis of BNS provisions, decisions of the Supreme Court of India, comparative international law, and current liability schemes and reform proposals. The focus throughout is on the absence of conscious volition in machines, its consequences for imposing criminal sanctions on AI systems, and how criminal law must evolve in response.
Main Body
Understanding AI in Criminal Context
Narrow AI (for example, facial recognition that matches an input against a database) and still-theoretical general AI both rest on machine learning: the statistical detection of patterns in large volumes of data. Deep neural networks produce outputs that even their creators cannot fully predict, generating a legal fiction of autonomous decision-making. Traditional software follows fixed rules with known outcomes; AI operates as a "black box" whose outputs reveal no identifiable chain of reasoning from the inputs. This inability to trace causation makes it philosophically impossible, and evidentially unfeasible under BNS §101's requirement that knowledge be proved beyond a reasonable doubt, to establish a defendant's liability.
Criminal liability presupposes a human agent who intends a crime and controls his or her conduct while committing it. Bhavesh Jayanti Lakhani v. State of Maharashtra framed mens rea as "conscious moral wrongdoing", a standard that silicon determinism cannot satisfy. Even where AI output resembles human reasoning, Searle's Chinese Room thought experiment demonstrates that manipulating syntax (patterns) yields neither semantic understanding nor true agency. With actus reus present (the external harm of a vehicle collision) but the cognitive-moral foundation of mens rea absent, an accountability void opens that tort law cannot adequately deter. The accelerating pace and falling cost of the technology will pressure the BNS's doctrinal architecture, inherited from the nineteenth-century Indian Penal Code, to adapt or become obsolete.
Mens Rea: Anthropocentric Exclusivity Under BNS
The graded levels of mens rea (intention, knowledge, recklessness, and negligence) each presuppose a being capable of moral and rational thought. The BNS retains the IPC's retributive premise that punishment must be deserved, a logic echoing the agency requirements of R v. Prince. Indian courts are reluctant to recognize strict liability outside public-welfare offences, where alone a no-fault approach is tolerated. For corporations, the identification theory attributes the mental states of controlling officers to the company, holding it vicariously responsible.
In M.C. Mehta v. Union of India (the Oleum Gas Leak case, decided in the shadow of the Bhopal disaster), the Supreme Court established absolute liability for hazardous operations. Even there, liability presupposes a causal chain running from the human decisions that deploy a technology to the resulting harm. The shifting parameters of autonomous AI disrupt that causal link between a programmer's or deployer's choices and the harm caused.
The analogy between human directors (who possess collective mens rea) and algorithms (which do not) is flawed: directors have the capacity for moral judgment, while algorithms lack both that capacity and any basis for moral censure. Grounding culpability in proxy metrics such as training-data quality or optimization objectives severs blame from moral fault, reducing culpability to a matter of engineering specifications. Under the BNS's burden-of-proof rules, the evidence needed to establish a culpable mental state simply does not exist for a disembodied AI.
Can Machines Bear Mens Rea? Direct Liability’s Bankruptcy
Direct AI culpability fails on functionalist grounds. Its proponents mistake behavioral agency for moral agency, arguing that AI's complex learning is equivalent to human intention. This disregards the three rationales of punishment: deterrence, retribution, and rehabilitation. Insentient machines do not respond to deterrence, cannot suffer and so cannot be objects of retribution, and cannot be rehabilitated because they form no goals of their own.
Punishing AI reduces to absurdity: "imprisoning" server farms generates neither reflection nor redemption.
Legally, frameworks of the kind Ugo Pagallo surveys in The Laws of Robots, including proposals for artificial personhood, fail because criminal law is irreducibly grounded in human agency. In India, State of Maharashtra v. Mayer Hans George shows that liability turns on the statutory treatment of knowledge in a human defendant; an AI's cognitive emptiness places it outside that inquiry altogether.
Courts in the US and UK have consistently traced AI-generated harms back to a negligent human or corporate actor, and the EU's AI Act prioritizes risk governance over criminalization. Treating AI as a separate legal subject lacks any philosophical or experiential basis for retribution, and would create a regulatory environment of jurisdictional arbitrage in which AI operators escape consequences for any crime committed.
Alternative Liability Models & Legislative Reforms
Vicarious Liability: Continue applying the "directing mind" standard to AI "custodians", treating negligent programming, testing, or deployment as recklessness. State v. Vasquez, the prosecution arising from the Uber crash in Arizona, illustrates how this model works until machines become truly autonomous.
Corporate Criminal Liability: Under the identification theory, corporations should be held liable for systemic failures of oversight. Accountability should rest on risk-utility analysis, and sanctions (fines, suspension of business, and disqualification from corporate directorships) should restore deterrence.
Strict Liability: The absolute-liability principle of M.C. Mehta should be reserved for ultrahazardous AI deployments (e.g., autonomous weapons, AI systems for hospital diagnostics), with ordinary strict liability elsewhere, so as not to over-deter AI research.
India must enact a national AI Liability Act requiring explainability logs that document the reasons behind AI decisions and human-in-the-loop vetoes for lethal actions. The Act should establish a rebuttable presumption of fault absent proper documentation, mandate AI-literacy training in judicial academies, and codify forensic code analysis as an evidentiary standard. Regulatory delay will drive capital to Singapore or the EU; a hybrid criminal-civil-administrative framework can balance accountability against innovation. Predictive policing through CCTNS should face mandatory audits, and tax incentives should reward ethical applications of AI. The law must fix human accountability for AI before the incident, not after it.
Conclusion
Under the current BNS, mens rea is exclusive to human beings; AI output lacks intent, consciousness, volition, and moral culpability, all prerequisites of culpable agency. Pragmatic hybrids that extend vicarious liability to custodians, assign corporations risk-based responsibility, and establish tiered strict liability offer workable deterrence while preserving doctrinal integrity and reflecting global best practice.
A comprehensive AI statute is urgently needed: one that classifies risk on the EU model, mandates appropriate transparency, and equips the judiciary accordingly. The criminal justice system faces its own Turing Test: it must accommodate techno-agency while preserving its human-centric moral foundation. Failure to evolve means continued impunity; calibrated reform preserves accountability. Future quantum-AI developments will sharpen these debates, but the priority today is to address the unbridgeable gap in intent and agency through human-centric tools. Justice requires no less.
Reference(S):
- Bharatiya Nyaya Sanhita, No. 45, Acts of Parliament, 2023, § 2(n), § 101.
- Bhavesh Jayanti Lakhani v. State of Maharashtra, (2009) 9 S.C.C. 551.
- M.C. Mehta v. Union of India (Oleum Gas Leak Case), (1987) 1 S.C.C. 395.
- R v. Prince, (1875) 2 C.C.R. 154 (U.K.).
- State v. Vasquez, No. CR20180227 (Ariz. Super. Ct. 2019) (Uber autonomous vehicle crash).
- Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer 2013).
- Andrew Ashworth & Jeremy Horder, Principles of Criminal Law (9th ed. 2022).
- John R. Searle, Minds, Brains, and Programs, 3 Behav. & Brain Sci. 417 (1980).
- European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.
- Harry Surden, Artificial Intelligence and Law: An Overview, 35 Ga. St. U. L. Rev. 1305 (2019).





