Artificial Intelligence and Legal Liability: Challenges and the Way Forward in Indian Law

Authored By: Adarsh Pratap Singh

University of Lucknow

Abstract

The rapid development and deployment of Artificial Intelligence (AI) technologies have transformed critical sectors such as healthcare, finance, governance, transportation, and legal services. AI-driven systems provide significant benefits in efficiency, speed, and decision-making accuracy. However, their growing autonomy also raises serious legal issues, especially regarding accountability and liability when such systems cause harm. Traditional liability frameworks, which rely on human intention, fault, and foreseeability, are often inadequate to address AI’s unique features, including algorithmic opacity, autonomous operation, and self-learning behaviour.

This research article examines the issue of legal liability arising from the use of artificial intelligence within the Indian legal framework. It analyses existing statutory provisions, judicial principles, and relevant international developments to identify gaps in the current approach. The article argues that India presently lacks a specialised and coherent liability regime for AI-related harms and continues to depend on traditional tort, criminal, and product liability doctrines that are not fully equipped to regulate emerging technologies. The study concludes by suggesting legal and policy reforms aimed at ensuring accountability, protecting individual rights, and encouraging responsible and ethical innovation.

Introduction

Artificial Intelligence is no longer confined to theoretical or experimental use; it has become an integral part of everyday decision-making across numerous fields. Technologies such as facial recognition software, automated credit assessment tools, autonomous vehicles, predictive policing systems, and algorithm-based governance mechanisms increasingly influence human lives. In India, government initiatives promoting digital transformation, e-governance, smart infrastructure, and technological innovation have further accelerated the adoption of AI in both public administration and private enterprise.

Despite its benefits, the growing reliance on AI has also resulted in new categories of legal disputes and potential harms. Instances of algorithmic discrimination, flawed automated decisions, data security breaches, and accidents involving autonomous systems raise complex legal questions. Central among these is the question of liability: when an AI system causes harm, who should bear legal responsibility? Whether liability should rest with developers, manufacturers, deployers, data providers, or end users remains unclear under existing legal doctrines.

This research critically evaluates the challenges posed by AI-related liability in India. It assesses the adequacy of the current legal framework and explores the need for reforms that balance technological progress with accountability and legal certainty.

Literature Review and Existing Legal Framework

The issue of AI and legal liability has attracted significant attention in global academic and policy discourse. Scholars differ on the conceptual classification of AI: some view it merely as a sophisticated tool, while others argue that advanced AI systems display a degree of autonomy that warrants reconsideration of traditional legal categories. Proposals such as granting limited legal personality to AI systems have been debated, though many legal theorists caution against such approaches, emphasising that accountability must ultimately remain with human actors.

In India, scholarly discussion has largely focused on ethical AI, data protection, and governance models rather than liability mechanisms. Policy initiatives, particularly those led by NITI Aayog, highlight the transformative potential of AI for economic growth and social welfare. However, these policy documents provide limited guidance on questions of legal responsibility and liability, leaving courts to rely on pre-existing statutes that were not designed with autonomous technologies in mind.

Applicable Legal Provisions in India

India currently does not have a dedicated, comprehensive statute governing Artificial Intelligence. Instead, issues of AI liability, accountability, and regulation are addressed through a fragmented set of existing legal provisions spread across constitutional law, tort law, consumer protection, information technology law, and sector-specific regulations. This fragmented framework, while partially effective, is increasingly inadequate to address the complex and autonomous nature of AI-driven systems.

At the constitutional level, Articles 14 and 21 of the Constitution of India play a foundational role in governing AI-related harms. Article 14, which guarantees equality before the law and protection against arbitrariness, becomes relevant where algorithmic decision-making leads to discriminatory or biased outcomes. AI systems used in areas such as recruitment, credit scoring, predictive policing, or welfare distribution may violate Article 14 if they operate on opaque criteria or embed systemic biases. Article 21, which guarantees the right to life and personal liberty, has been judicially expanded to include the right to privacy, dignity, and informational self-determination. The use of AI systems that infringe privacy, enable mass surveillance, or cause automated deprivation of rights may thus attract constitutional scrutiny.
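
The technical mechanism behind such biased outcomes is worth making concrete. The following sketch is purely illustrative: the data, feature names, and thresholds are invented, and the point is only that a scoring model which is never shown a protected attribute can still reproduce historical bias through a correlated proxy feature.

```python
# Purely illustrative sketch; all data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# A hypothetical protected attribute, deliberately excluded from the model.
group = rng.integers(0, 2, n)

# A proxy feature (e.g. a locality code) that correlates with the group.
locality = group + rng.normal(0.0, 0.3, n)
income = rng.normal(50.0, 10.0, n)

# Historical approvals were biased: group 1 needed a higher income.
approved = np.where(group == 0, income > 48, income > 58)

# The model is trained only on the apparently neutral features.
X = np.column_stack([income, locality])
model = LogisticRegression().fit(X, approved)

# On this synthetic data the learned weight on the proxy comes out
# negative: the model penalises locality and silently reproduces the
# historical bias, even though it never saw the protected attribute.
print(dict(zip(["income", "locality"], model.coef_[0].round(2))))
```

Because the bias surfaces only in a learned weight rather than in any express criterion, a litigant challenging such a system under Article 14 may be unable to point to a discriminatory rule at all, which is precisely the opacity problem noted above.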

From a statutory perspective, the Information Technology Act, 2000 is the primary legislation addressing digital technologies. Although the Act does not explicitly regulate AI, provisions relating to data protection, intermediary liability, and cybersecurity are indirectly applicable. Sections dealing with unauthorised access, data breaches, and failure to protect sensitive personal data may impose liability on entities deploying AI systems that mishandle user data. However, the intermediary liability framework under the IT Act is ill-suited for AI systems that actively generate decisions rather than merely host third-party content.

The Consumer Protection Act, 2019 is another significant legal instrument. Its provisions on product liability and unfair trade practices may be invoked where AI-powered products or services cause harm due to defects, misleading claims, or inadequate disclosures. AI-driven services, particularly in fintech, health-tech, and e-commerce, raise complex questions regarding what constitutes a “defect” and whether continuous self-learning systems can be assessed using traditional consumer law standards.

In addition, principles of tort law, particularly negligence and strict liability, continue to govern AI-related harm in the absence of specific legislation. Courts may assess whether developers or deployers exercised reasonable care in designing, training, and monitoring AI systems. However, applying traditional tort principles to autonomous and opaque systems presents challenges in establishing foreseeability, causation, and fault.

Overall, while existing legal provisions provide a tentative framework to address AI-related issues, they remain reactive and piecemeal. The absence of sector-specific AI regulations and clear statutory guidance creates uncertainty for users, developers, and adjudicating authorities alike, underscoring the urgent need for a coherent and future-ready AI regulatory regime in India.

In the absence of such a dedicated statute, issues of liability are addressed indirectly through the application of general laws, including:

  1. Law of Torts: Principles such as negligence, strict liability, and vicarious liability may be invoked where AI systems cause harm. However, establishing fault, causation, and standard of care is particularly challenging due to the technical complexity and opacity of AI decision-making processes.
  2. Consumer Protection Act, 2019: This legislation provides remedies against defective products and deficient services, which may extend to AI-based products and services. Nevertheless, the Act does not explicitly address systems that evolve or modify their behaviour after deployment.
  3. Information Technology Act, 2000: The Act regulates issues relating to data protection, cybersecurity, and intermediary liability, all of which may arise in AI-related disputes. Its scope, however, remains limited in addressing harm caused by autonomous decision-making systems.
  4. Indian Penal Code, 1860: Criminal liability under the IPC (now succeeded by the Bharatiya Nyaya Sanhita, 2023) is based on the presence of mens rea. Since AI systems lack intent or consciousness, attributing criminal liability in cases involving AI-induced harm is highly problematic.

While these statutes provide fragmented remedies, they were not enacted with autonomous and self-learning technologies in mind, resulting in significant regulatory gaps.

Analysis and Discussion: Liability Challenges Posed by AI

  • Attribution of Responsibility

One of the most pressing challenges in AI-related liability is identifying the appropriate party to hold responsible when harm occurs. AI systems typically involve multiple stakeholders, including developers, software engineers, manufacturers, data suppliers, deployers, and end users. This multi-layered structure complicates the attribution of liability under traditional fault-based legal principles.

For instance, in an accident involving an autonomous vehicle, responsibility may be shared among the vehicle manufacturer, the software developer, the sensor supplier, and the human operator. Indian tort law, which primarily relies on individual fault, is ill-equipped to apportion such distributed responsibility.

  • Autonomy and Foreseeability

Foreseeability plays a crucial role in determining negligence. However, advanced AI systems, particularly those using machine learning techniques, continuously evolve based on new data inputs. This capacity for independent adaptation makes it difficult for developers or operators to anticipate specific outcomes, thereby weakening the application of foreseeability as a legal standard.
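
Why foreseeability breaks down can be shown with a minimal sketch (again with invented data and parameters): an online-learning classifier, updated on post-deployment data the developer never saw, can return a different decision for exactly the same input it was shipped with.

```python
# Minimal sketch of post-deployment behavioural drift; all data and
# parameters here are invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)

# Behaviour at deployment: decisions effectively depend on feature 0.
X_train = rng.normal(0.0, 1.0, (500, 2))
y_train = (X_train[:, 0] > 0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_train, y_train, classes=np.array([0, 1]))
for _ in range(4):
    model.partial_fit(X_train, y_train)

probe = np.array([[0.5, 0.0]])  # one fixed input, re-tested below
print("decision at deployment:", model.predict(probe))

# After deployment the system keeps learning from a stream of data whose
# distribution and implicit decision rule the developer never anticipated.
X_stream = rng.normal(1.5, 1.0, (500, 2))
y_stream = (X_stream[:, 1] > 1.5).astype(int)
for _ in range(5):
    model.partial_fit(X_stream, y_stream)

# The same fixed input can now receive a different decision: the
# harm-causing behaviour did not exist when the system was released.
print("decision after drift:  ", model.predict(probe))
```

The same dynamic underlies the product liability difficulty discussed below: the system examined after an incident is, in a real sense, no longer the product that was originally supplied.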

  • Absence of Mens Rea

Criminal liability under Indian law requires the existence of a guilty mind. Since AI systems do not possess intention or moral agency, holding them criminally liable is legally untenable. Assigning criminal responsibility to human actors is also challenging where harm arises from complex, automated decision-making processes beyond direct human control.

  • Product Liability and AI Systems

AI-based technologies may be categorised as products or services under consumer protection laws. However, traditional product liability frameworks are based on the assumption that products are static and unchanging. AI systems, by contrast, continue to learn and adapt after deployment, raising unresolved questions regarding defects, updates, and post-market responsibility.

Comparative Perspective

Several jurisdictions have begun developing AI-specific liability frameworks to address the unique risks posed by autonomous and algorithm-driven technologies. The European Union has taken a leading role through the proposed Artificial Intelligence Act and the AI Liability Directive, which together adopt a risk-based regulatory model. Under this approach, AI systems classified as high-risk—such as biometric identification tools, autonomous vehicles, medical diagnostic systems, and AI used in law enforcement—are subject to enhanced compliance obligations and stricter liability standards. Notably, the EU framework seeks to ease the burden of proof on victims by introducing presumptions of causality in certain cases, thereby strengthening access to justice.

In the United States, there is no comprehensive federal legislation specifically addressing AI liability. Instead, courts rely on traditional tort principles, including negligence and product liability, to adjudicate AI-related disputes on a case-by-case basis. While this flexible approach allows courts to adapt existing doctrines to new technologies, it often leads to fragmented and unpredictable outcomes. Recent policy debates in the U.S. have increasingly focused on algorithmic accountability, transparency, and bias, reflecting growing concern over the societal impact of AI systems.

China has adopted a sector-specific regulatory model, introducing detailed rules governing algorithmic recommendation systems, deepfake technologies, and data-driven platforms. These regulations impose direct obligations on technology companies, including requirements related to transparency, user consent, and mechanisms for grievance redressal. Although China’s regulatory approach is more interventionist, it demonstrates the feasibility of proactive governance in addressing AI-related harms.

In contrast, India lacks a structured and comprehensive approach to AI liability. While policy discussions and strategy documents exist, legislative action remains limited. As a result, Indian courts continue to rely on conventional legal principles that were not designed to address the complexities of autonomous and self-learning systems.


Findings and Observations

  1. Indian law continues to depend on traditional liability doctrines that are not fully suited to addressing AI-specific risks.
  2. There is considerable ambiguity regarding the allocation of responsibility among the various stakeholders involved in AI systems.
  3. Existing criminal liability frameworks are largely incompatible with autonomous and self-learning technologies.
  4. The absence of sector-specific AI regulations has created legal uncertainty for both developers and users.

Conclusion and Recommendations

Artificial Intelligence poses novel and complex challenges to existing legal liability frameworks. Although India has made significant progress in promoting AI-driven innovation, the legal system has not evolved at a comparable pace: existing statutes provide only partial coverage and are insufficient to address the unique complexities of AI liability. A forward-looking legal framework must balance innovation with accountability, ensuring that AI serves society without undermining rights. India stands at a critical juncture: by proactively addressing AI liability, it can foster trust in technology and position itself as a global leader in responsible AI governance.

This article proposes the following measures:

  1. Enactment of a Dedicated AI Liability Framework: India should introduce comprehensive legislation specifically addressing AI-related harms and clearly defining liability standards.
  2. Risk-Based Regulatory Approach: AI applications posing higher risks to life, safety, and fundamental rights should be subject to enhanced regulatory scrutiny and liability obligations.
  3. Clear Allocation of Responsibility: Legal provisions must clearly define the duties and responsibilities of developers, manufacturers, deployers, and users of AI systems.
  4. Reform of Consumer Protection Laws: Existing consumer protection mechanisms should be updated to account for the evolving and adaptive nature of AI-based products and services.
  5. Judicial and Institutional Capacity Building: Continuous training and sensitisation of judges, lawyers, and regulators in AI-related issues are essential for effective adjudication and enforcement.

A balanced, forward-looking, and technology-sensitive legal framework will enable India to ensure accountability, protect individual rights, and foster responsible innovation in the age of artificial intelligence.

Reference(S):

  1. NITI Aayog, National Strategy for Artificial Intelligence (Government of India, 2018).
  2. The Consumer Protection Act, 2019 (India).
  3. The Information Technology Act, 2000 (India).
  4. European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final.
  5. Scholarly literature on artificial intelligence and legal liability.
