
CAN MACHINES COMMIT CRIMES? RETHINKING MENS REA IN THE AGE OF AI

Authored By: Shreeya Vaish

S.S. Khanna Girls' Degree College, University of Allahabad

Abstract

The increasing integration of artificial intelligence (AI) into everyday life poses significant legal and ethical challenges, particularly in criminal law. Traditional legal doctrines are built on human intent, yet machines now perform actions with autonomous decision-making capabilities. This article explores whether AI systems can be held criminally liable under existing Indian law, and if not, how the legal framework must adapt. It examines the relevance of mens rea, judicial interpretations of intent, and the limitations of current laws such as the Indian Penal Code and the Information Technology Act. Drawing from comparative legal approaches and recent developments, the article argues for a reconfiguration of liability principles, distinguishing between tool, agent, and hybrid models of responsibility. In doing so, it calls for a regulatory framework that upholds accountability without collapsing under the weight of technological determinism.

Introduction: The Legal Vacuum in Machine Accountability

Artificial Intelligence (AI) is no longer confined to science fiction. From autonomous vehicles to facial recognition systems, AI-driven technologies are embedded in our financial markets, healthcare decisions, law enforcement, and military operations. As these systems make decisions that affect real-world outcomes, questions arise about accountability, especially when harm results from an AI's action. If a self-driving car kills a pedestrian or an algorithm flags an innocent individual as a suspect, who should bear criminal liability? Can a machine "intend" to commit a crime? The dilemma shakes the foundations of criminal jurisprudence, particularly the notion of mens rea—the mental element that defines culpability.1

The integration of AI has also introduced a form of technological opacity—commonly referred to as the "black box" problem—where even developers may not fully understand how an AI system arrives at a particular decision.2 This becomes even more concerning in sectors such as law enforcement, where AI-based surveillance, facial recognition, and predictive policing can result in harmful consequences for individuals without any clearly identifiable wrongdoer.3 The stakes are even higher in a democratic setup like India, where law and policy must ensure that technological advances do not undermine civil liberties or basic rights.

With over 800 million internet users and a rapidly digitising economy, India's legal ecosystem must now grapple with the realities of machine autonomy, accountability, and the urgent need for safeguards.4 The growing reliance on AI-driven systems across public and private sectors necessitates a re-examination of criminal law doctrines that were designed for human wrongdoers. This article attempts to explore whether our existing legal principles are equipped to assign responsibility when it is no longer clear who—or what—committed the wrongful act.

The Indian legal framework, like most jurisdictions, is deeply anthropocentric. It presumes human agency, emotion, and moral reasoning. But AI systems, especially those powered by machine learning, operate through data inputs, probabilistic outputs, and neural networks rather than conscious choice.5 The result is a growing mismatch between legal theory and technological practice, leaving courts and lawmakers struggling to assign blame in an AI-mediated world.

Legal Framework 

Mens rea, or the "guilty mind," is a cornerstone of criminal liability. Under the Indian Penal Code, 1860 (IPC), most offences require a mental element such as intention, knowledge, or recklessness. Sections 299 and 300 of the IPC,6 for instance, distinguish between culpable homicide and murder based on the perpetrator's mental state. The Supreme Court of India has consistently affirmed the centrality of mens rea, holding in State of Maharashtra v. M.H. George (1965) that penal statutes are presumed to require a guilty mind unless the legislature clearly excludes it.7

The IPC does not recognise non-human actors as subjects of criminal liability. Section 11 defines "person" to include "any company or association or body of persons,"8 which has been used to extend criminal liability to corporations in certain cases. In Standard Chartered Bank v. Directorate of Enforcement (2005),9 the Supreme Court held that companies can be prosecuted even if they cannot be imprisoned. However, this rationale cannot be mechanically extended to AI, which lacks legal personhood and corporate status.

The Information Technology Act, 2000, India's primary legislation on digital offences, also fails to address autonomous decision-making. Provisions under Sections 66 and 67 target human misuse of digital platforms but are silent on harms caused directly by algorithms or learning systems. There is no statutory recognition of AI as an actor in the chain of criminal causation.10

Furthermore, Indian criminal jurisprudence has traditionally relied on physical presence or direct involvement to establish culpability. However, AI systems operate in distributed environments where harm may originate from autonomous learning or predictive modelling, creating gaps in assigning direct causality. This becomes particularly problematic when AI systems generate decisions that deviate from their training parameters—raising questions about foreseeability and proximate cause.11

Additionally, the current liability structure does not consider the concept of 'delegated agency' in the context of machines. Unlike employees or agents who act on behalf of an organisation and are governed by doctrines such as respondeat superior, AI systems lack consciousness and legal identity, yet their actions may still be a product of embedded instructions or machine learning outcomes. The absence of codified norms addressing such delegation of digital agency reinforces the limitations of the existing legal framework.12

Globally, legal theorists have begun to explore the notion of extending criminal accountability indirectly through accessory liability or constructive knowledge doctrines. However, India has yet to engage seriously in this jurisprudential debate.13 The need of the hour is a legislative approach that recognises the distinctiveness of AI conduct and adjusts culpability doctrines accordingly. Without this, the framework risks being too rigid to accommodate emerging technologies or too vague to ensure accountability.

Judicial Interpretation 

Indian courts have not yet ruled on AI-related criminal liability, but comparative jurisprudence offers some insights. In the UK, the Serious Crime Act 2007 extends liability to individuals who assist or encourage offences,14 raising the question of whether designers or operators of harmful AI could be prosecuted under such provisions. In the EU, discussions around the creation of "electronic personhood" for AI agents were initiated by the European Parliament in 2017 but ultimately rejected, reflecting scepticism over granting legal agency to machines.15

In the Indian context, the judiciary has shown caution in expanding liability without explicit legislative mandate. In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court emphasised the importance of privacy and autonomy in the digital age but stopped short of framing AI accountability principles.16 Without judicial guidance or legislative clarity, courts remain hesitant to attribute fault to AI systems directly.

However, Indian courts have acknowledged the evolving technological landscape in related contexts. In Shreya Singhal v. Union of India (2015), the Supreme Court underscored the necessity of protecting fundamental rights like speech and privacy in the face of emerging digital technologies.17 Although the judgment focused on Section 66A of the IT Act, it signalled judicial awareness of the need to balance technological evolution with constitutional safeguards.

Further, in K.S. Puttaswamy (Retd.) v. Union of India (2018),18 concerning the Aadhaar project, the Court recognised the implications of large-scale data processing and surveillance, stressing the importance of proportionality and accountability when technological systems interface with individual rights. These interpretations reflect a judicial willingness to engage with digital harms, even if AI-specific rulings remain absent.

Given the increasing deployment of AI in law enforcement, healthcare, and fintech, judicial interpretation will eventually be called upon to address issues of causation, foreseeability, and delegated decision-making. Indian courts may look to doctrines like vicarious liability, constructive knowledge, and the precautionary principle to anchor accountability frameworks in the absence of statutory provisions specific to AI.

Critical Analysis 

AI, by its very design, challenges the traditional framework of criminal law. A machine does not form intentions, feel emotions, or foresee outcomes in a human sense. Its "decisions" are the product of algorithms, not volition.19 This complicates the application of mens rea to autonomous systems. If an AI-powered drone causes wrongful death during a surveillance operation, attributing intention becomes legally and philosophically problematic.

Scholars have proposed three primary models for dealing with AI and criminal liability: the tool model, the agent model, and the hybrid model. The tool model views AI as an extension of the human user, holding designers, programmers, or operators liable for any criminal outcome.20 This is analogous to liability for misuse of weapons or animals.21

The agent model treats AI as a quasi-legal person, capable of limited autonomy and thus deserving of distinct accountability. This model is controversial because it implies recognition of some form of machine agency. The hybrid model suggests shared liability, where both human actors and AI systems bear differentiated responsibilities.

For India, the tool model appears most consistent with existing legal doctrines. Holding developers or users liable under negligence or vicarious liability principles would allow for accountability without reworking the foundational idea of mens rea.22 However, this model fails when AI evolves beyond its programming or learns new behaviours autonomously.

Recent Developments 

A landmark international development has been the European Union's Artificial Intelligence Act, which was provisionally agreed upon in 2023.23 This regulatory framework aims to classify AI systems based on risk—ranging from minimal to unacceptable—and imposes legal obligations on developers, users, and providers, especially for high-risk AI applications. While it does not address criminal liability directly, it lays a foundation for state accountability and strict compliance procedures. The Act encourages transparency, human oversight, and auditability of automated decisions, principles which can inform India's nascent regulatory stance.

In the United States, although federal regulation of AI remains fragmented, state-led initiatives such as the California Consumer Privacy Act (CCPA)24 and the proposed Algorithmic Accountability Act25 reflect a growing concern about the opaque use of AI, particularly in law enforcement and employment. The Federal Trade Commission (FTC) has also released guidance warning companies against the use of biased or discriminatory algorithms, signalling potential legal consequences for harms caused by AI systems.

Within India, the Ministry of Electronics and Information Technology (MeitY) released a roadmap for Responsible AI,26 which emphasises ethical design, data privacy, and user trust. NITI Aayog's National Strategy for Artificial Intelligence (NSAI), published in 2018 and updated subsequently, outlines five priority sectors for AI development, but does not offer a robust legal framework for accountability.27 Meanwhile, the recently enacted Digital Personal Data Protection Act, 2023,28 addresses certain aspects of data governance but is silent on autonomous decision-making or criminal liability.

In addition to governmental initiatives, civil society groups and legal scholars have begun advocating for algorithmic transparency, particularly in criminal justice technologies like predictive policing and facial recognition, which are already being piloted by police departments in cities such as Hyderabad and Delhi.29 These trends underscore the urgent need for comprehensive AI legislation that not only promotes innovation but also addresses the legal ambiguities surrounding harm, intent, and accountability.

Suggestions

To effectively bridge the regulatory vacuum surrounding AI and criminal liability, India must undertake a multi-pronged approach grounded in legislative foresight, institutional reform, and ethical design. First, the Indian Parliament should consider enacting a dedicated law addressing AI accountability, which outlines the rights and obligations of AI developers, operators, and users.30 This law must explicitly define key terms such as algorithmic harm, autonomous decision-making, and digital agency to avoid interpretative ambiguity.

Second, a strict liability regime could be introduced for high-risk AI applications, such as autonomous vehicles or facial recognition systems used in law enforcement.31 This would shift the burden of proof from the harmed individual to the deploying entity, ensuring a more equitable standard of justice. Drawing inspiration from environmental law, where polluters are held liable regardless of intent, this model could prioritise harm prevention over subjective culpability.32

Third, it is imperative to institutionalise transparency through legally mandated algorithmic impact assessments. These assessments should evaluate the potential social, economic, and legal consequences of deploying AI systems and must be made publicly accessible. Independent regulatory bodies with technical expertise should be empowered to audit these assessments and penalise non-compliance.

Fourth, data protection mechanisms must evolve in tandem with AI regulation. While the DPDP Act lays the groundwork for consent and data minimisation, it lacks robust provisions on algorithmic bias, profiling, and discrimination.33 Amendments should introduce safeguards against opaque data processing, especially in automated decision-making contexts.

Fifth, the judiciary must be equipped to handle AI-related disputes. Judicial training programmes on emerging technologies, the establishment of specialised benches, and the incorporation of expert testimony in trials can enhance judicial capacity.34 The legal fraternity must also engage in interdisciplinary research and dialogue to update traditional doctrines like mens rea, causation, and foreseeability in the context of intelligent machines.

Finally, the regulatory architecture should foster ethical innovation by incentivising human-centric AI design. Public-private collaborations, ethical review boards, and digital literacy programmes can ensure that AI deployment aligns with democratic values and public interest. A rights-based approach—rather than a purely utilitarian or economic one—must guide India's journey toward technologically informed criminal justice.35

Conclusion

India finds itself at the threshold of a technological revolution that demands a commensurate transformation in its legal frameworks. As artificial intelligence continues to evolve and permeate every aspect of society, our criminal justice system must shift from anthropocentric presumptions to a model that contemplates autonomous and algorithmic actions. While machines cannot possess intent in the human sense, the outcomes they produce can be deeply consequential. This paradox necessitates a legal architecture that balances innovation with accountability, automation with justice.

The current legal infrastructure, although rooted in well-established doctrines of mens rea and causation, falls short of addressing the complexities introduced by machine learning systems. Judicial caution and constitutional values have so far ensured a degree of protection, but a future-oriented legislative response is now overdue. By adopting robust statutory definitions, embracing strict liability models where appropriate, and equipping regulatory and judicial institutions with technological competence, India can build a framework that is not only reactive but proactive.

Ultimately, the challenge is not whether AI can commit a crime, but whether our legal system can keep pace with the evolving nature of agency and responsibility. The law must continue to serve as both a guardrail and a guidepost, ensuring that technological progress does not come at the cost of human rights and democratic integrity. A nuanced, principle-driven approach to AI and criminal liability will ensure that India remains not just a digital leader, but a just and inclusive one.

References

Books

  1. Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (2013).
  2. Samir Chopra & Laurence F. White, A Legal Theory for Autonomous Artificial Agents (2011).

Journals / Academic Papers

  1. Gabriel Hallevy, I, Robot—I, Criminal: When Science Fiction Becomes Reality, 4 Akron Intell. Prop. J. 171 (2010).
  2. Luciano Floridi et al., AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, 28 Minds & Mach. 689 (2018).
  3. Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, 6 Ethics & Info. Tech. 175 (2004).
  4. Sandra Wachter et al., Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 7 Int'l Data Priv. L. 76 (2017).
  5. Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399 (2017).

Case Laws (India)

  1. State of Maharashtra v. M.H. George, (1965) 1 SCR 123.
  2. Standard Chartered Bank v. Directorate of Enforcement, (2005) 4 SCC 530.
  3. Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
  4. K.S. Puttaswamy (Retd.) v. Union of India, (2019) 1 SCC 1.
  5. Shreya Singhal v. Union of India, (2015) 5 SCC 1.
  6. Indian Council for Enviro-Legal Action v. Union of India, (1996) 3 SCC 212.

Official Government & Institutional Websites

  1. Ministry of Electronics and Information Technology (MeitY), National Strategy for Responsible AI (2021), meity.gov.in.
  2. NITI Aayog, National Strategy for Artificial Intelligence #AIForAll (2018), niti.gov.in.
  3. European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.
  4. European Parliament, Resolution on Civil Law Rules on Robotics, 2015/2103(INL).
  5. Federal Trade Commission, Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI, (Apr. 19, 2021), ftc.gov.

News & Civil Society Reports

  1. Apar Gupta, Why India Needs an Algorithmic Accountability Framework, Internet Freedom Foundation (Oct. 20, 2023), internetfreedom.in.
  2. Brookings Institution, Congress Moves Toward Algorithmic Accountability (Mar. 2022), brookings.edu.

1 See generally Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law 7 (2013); Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, 6 Ethics & Info. Tech. 175 (2004).

2 Jenna Burrell, How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms, 3 Big Data & Soc'y 1, 3 (2016); Brent Mittelstadt et al., The Ethics of Algorithms: Mapping the Debate, 3 Big Data & Soc'y 1, 4 (2016).

3 Rashida Richardson, Jason M. Schultz & Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 94 N.Y.U. L. Rev. Online 15, 20–21 (2019).

4 Press Information Bureau, Ministry of Electronics & IT, Digital India Programme (2023), https://pib.gov.in/PressReleseDetail.aspx?PRID=1909306.

5 Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 407–08 (2017).

6 Indian Penal Code, §§ 299–300 (1860).

7 State of Maharashtra v. M.H. George, (1965) 1 SCR 123 (India).

8 Indian Penal Code, § 11 (1860).

9 Standard Chartered Bank v. Directorate of Enforcement, (2005) 4 SCC 530 (India).

10 Information Technology Act, 2000, §§ 66–67.

11 See generally Gabriel Hallevy, Liability for Crimes Involving Artificial Intelligence Systems, 4 J.L. & Tech. 475 (2010).

12 Samir Chopra & Laurence F. White, A Legal Theory for Autonomous Artificial Agents 122–24 (2011).

13 Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, 6 Ethics & Info. Tech. 175, 177–78 (2004).

14 Serious Crime Act 2007, c. 27, § 44 (U.K.).

15 European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL), ¶¶ 59–61.

16 Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).

17 Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).

18 K.S. Puttaswamy (Retd.) v. Union of India, (2019) 1 SCC 1 (India).

19 Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 407–08 (2017).

20 Gabriel Hallevy, I, Robot—I, Criminal: When Science Fiction Becomes Reality, 4 Akron Intell. Prop. J. 171, 185–92 (2010).

21 Id. at 186–87.

22 Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, 6 Ethics & Info. Tech. 175, 179–80 (2004).

23 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final (Apr. 21, 2021); see also Council of the EU, Press Release, Artificial Intelligence Act: Council and European Parliament Strike a Deal (Dec. 9, 2023), https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-provisional-agreement.

24 California Consumer Privacy Act of 2018, Cal. Civ. Code §§ 1798.100–1798.199 (West 2023).

25 Algorithmic Accountability Act of 2022, S.3572, 117th Cong. (2022) (proposed); see also Brookings Inst., Congress Moves Toward Algorithmic Accountability (Mar. 2022), https://www.brookings.edu/articles/congress-moves-toward-algorithmic-accountability.

26 Ministry of Electronics and Information Technology (MeitY), National Strategy for Responsible AI (2021), https://www.meity.gov.in/writereaddata/files/India%20AI%20Strategy.pdf.

27 NITI Aayog, National Strategy for Artificial Intelligence #AIForAll (2018), https://niti.gov.in/sites/default/files/2022-06/NationalStrategy-for-AI_0.pdf.

28 Digital Personal Data Protection Act, No. 22 of 2023, Acts of Parliament, 2023 (India).

29 Apar Gupta, Why India Needs an Algorithmic Accountability Framework, Internet Freedom Foundation (Oct. 20, 2023), https://internetfreedom.in/why-india-needs-an-algorithmic-accountability-framework.

30 Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 408–09 (2017).

31 Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law 98–100 (2013).

32 See Indian Council for Enviro-Legal Action v. Union of India, (1996) 3 SCC 212 (India) (affirming strict liability in environmental harm).

33 Digital Personal Data Protection Act, No. 22 of 2023, Acts of Parliament, 2023 (India).

34 S. Muralidhar, Law, Technology, and the Judiciary: The Need for Judicial Education (2018) (available at National Judicial Academy archives).

35 Luciano Floridi et al., AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, 28 Minds & Mach. 689 (2018).
