Authored By: Mohammad Sahil
Aligarh Muslim University
Abstract
In India, the burgeoning AI sector, projected to contribute $957 billion to the economy by 2035, faces significant legal liability challenges amid rapid adoption. With no dedicated AI law as of August 2025, India relies on existing frameworks such as the Consumer Protection Act (CPA) 2019 and the Digital Personal Data Protection Act (DPDPA) 2023, which leave gaps in addressing AI’s autonomy, opacity, and harm attribution.
The rapid integration of artificial intelligence (AI) into various sectors has raised profound questions about legal liability when AI systems cause harm. This article explores the boundaries of accountability in AI, examining traditional legal frameworks, emerging challenges posed by AI’s autonomy and opacity, recent developments in legislation and case law, and international regulatory approaches. It argues that while existing liability doctrines like negligence and strict liability provide a foundation, they often fall short in addressing AI-specific issues such as attribution of fault and third-party harms. Drawing on 2025 developments, including federal proposals in the US and the enforcement of the EU AI Act, the article highlights the need for adaptive liability regimes that balance innovation with safety. Ultimately, it concludes that robust liability mechanisms are essential to incentivize responsible AI development, but limits exist due to technological uncertainties and regulatory fragmentation.
Keywords: Artificial Intelligence, Legal Liability, Accountability, AI Regulation, EU AI Act, Negligence, Strict Liability, Emerging Theories, Data Privacy, Risk Management, Consumer Protection Act, DPDPA, Product Liability, Ethical AI, NITI Aayog.
Introduction
India stands as a global AI powerhouse, ranking second in AI skill penetration according to the Stanford AI Index Report 2025. With AI expected to add $25–30 billion to healthcare GDP by 2025, incidents like biased algorithms in lending or autonomous vehicle mishaps underscore liability concerns. Traditional laws apply, but AI’s “black box” nature and multi-stakeholder ecosystems strain them. As of mid-2025, no comprehensive AI legislation exists; the country relies instead on sector-specific guidelines and advisories. This patchwork approach limits effective redress, prompting calls for a tailored framework amid international influences like the EU AI Act.
Artificial intelligence has transformed industries from healthcare and transportation to finance and entertainment, promising efficiency and innovation. However, as AI systems become more autonomous, incidents of harm—ranging from biased hiring algorithms to fatal autonomous vehicle accidents—have underscored the urgent need to define legal liability. Traditional legal concepts, rooted in human agency, struggle to adapt to machines that learn and decide independently.
The limits of legal liability in AI stem from several factors: the “black box” nature of many AI models, where decision-making processes are opaque; the difficulty in proving foreseeability or causation; and the potential for cascading harms across global supply chains. In 2025, with AI adoption accelerating, policymakers and courts are grappling with these issues amid a patchwork of regulations. For instance, the EU’s AI Act imposes stringent requirements on high-risk AI, while the US debates federal moratoriums on state laws to avoid regulatory fragmentation. This article delves into these dynamics, analyzing current frameworks, challenges, recent cases and legislation, and future pathways to ensure AI serves society without unchecked risks.
Current Legal Frameworks for AI Liability
Legal liability traditionally falls under tort law, contract law, and product liability statutes, which can be applied to AI but with limitations.
Tort Law: Negligence and Strict Liability
Negligence requires proving that a party owed a duty of care, breached it, and caused harm. In AI contexts, this applies to developers who fail to implement adequate safety measures. However, proving breach is challenging when AI behaviors emerge unpredictably from training data. Strict liability, often used for defective products, holds manufacturers accountable regardless of fault if the product is unreasonably dangerous. Under the US Restatement (Third) of Torts, AI software might qualify as a “product,” but courts have been inconsistent, especially for open-source or generative AI.
In Europe, the Product Liability Directive (updated in 2024) extends to digital products, allowing claims for AI-induced damages. Yet, limits arise when AI evolves post-deployment, blurring the line between defect and intended function.
Contract Law and Warranties
Contracts between AI providers and users can allocate liability, but they often include disclaimers limiting responsibility. This shifts burden to end-users, raising equity concerns, especially for consumers lacking bargaining power.
Regulatory Oversight
Agencies like the US Federal Trade Commission (FTC) enforce liability for deceptive AI practices, while sector-specific rules (e.g., FDA for AI in medical devices) impose compliance duties. However, these are reactive, not preventive, and do not cover novel AI risks comprehensively.
Challenges in Applying Liability to AI
Autonomy and Attribution
Unlike traditional tools, AI can act independently, complicating fault attribution. If an AI chatbot provides harmful advice, is the developer liable for all possible outputs? Courts must determine if harms were foreseeable, but machine learning’s probabilistic nature makes this difficult.
Opacity and Explainability
“Black box” AI hinders proving causation or negligence. Victims struggle to access proprietary algorithms, limiting evidence in lawsuits.
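To make the notion of an auditable model concrete, the following minimal Python sketch shows how a deployer might record which inputs drove a model’s decisions and preserve that record as potential evidence. It is an illustration only: the credit-scoring framing, the feature names, and the use of scikit-learn’s permutation_importance are assumptions chosen for the example, not a mandated audit standard.

# Minimal sketch: producing an auditable record of what drives a model's decisions.
# Illustrative only; feature names, model choice, and file paths are hypothetical.
import json
from datetime import datetime, timezone

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # hypothetical features: income, age, loan_amount
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature contributes to predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

audit_record = {
    "model": "RandomForestClassifier",
    "evaluated_at": datetime.now(timezone.utc).isoformat(),
    "feature_importance": {
        name: round(float(score), 4)
        for name, score in zip(["income", "age", "loan_amount"], result.importances_mean)
    },
    "test_accuracy": round(float(model.score(X_test, y_test)), 4),
}

# Persist the record so it can be produced later as evidence of how the model behaved.
with open("model_audit_record.json", "w") as f:
    json.dump(audit_record, f, indent=2)

Even a simple artifact of this kind, generated at deployment time, changes the evidentiary picture: a claimant need not reverse-engineer a proprietary model to argue that a particular input, or a proxy for a protected attribute, dominated the decision.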
Third-Party Harms and Supply Chain Complexity
AI often involves multiple actors—data providers, model trainers, deployers—diluting responsibility. Third-party harms, like biased AI affecting non-users, fall outside market corrections, necessitating liability to internalize costs.
Innovation vs. Accountability Trade-Off
Overly broad liability could stifle innovation, especially for startups. Limits are proposed through safe harbors or caps, but these risk under-deterring large firms.
Current Legal Frameworks and Challenges in India
India’s liability regime for AI draws from tort, contract, and consumer laws. The CPA 2019 introduces product liability for defective goods or services, potentially covering AI systems causing harm through malfunctions or biases. For instance, if an AI diagnostic tool errs, manufacturers or service providers could be liable for compensation without proving negligence, provided the defect is established. However, proving causation in opaque AI models is arduous, as victims lack access to proprietary algorithms.
The Information Technology Act 2000 and IT Rules 2021 address intermediary liability, mandating platforms to label AI-generated content and curb deepfakes. Penalties include fines or service blocking for non-compliance. The DPDPA 2023 bolsters data privacy, imposing duties on AI deployers to prevent misuse of personal data in training, with fines of up to ₹250 crore for breaches. Yet these laws falter on AI-specific issues: autonomy complicates fault attribution (who is responsible if an AI evolves unpredictably post-deployment?). Third-party harms, like discriminatory AI in hiring, may evade strict liability if the system is not deemed a “product.”
Sectoral regulations add layers. In finance, RBI guidelines require explainable AI for credit decisions, while IRDAI’s January 2025 sub-committee report urges AI governance in insurance to mitigate liability risks. Healthcare AI falls under the Medical Devices Rules, but enforcement is inconsistent.
Challenges abound: few judicial precedents exist, with only one case on AI agent liability pending as of March 2025. India’s diverse socio-economic landscape amplifies biases in AI trained on skewed data, raising equity concerns unaddressed by current laws. Stifled innovation is another fear: startups argue that broad liability could hinder growth in a market where AI adoption is high but resources are limited.
Recent Developments and Proposals in India
2025 has seen policy momentum. NITI Aayog’s National Strategy for AI emphasizes responsible development, with advisories on ethical AI. The proposed AI bill, discussed in May 2025, aims for strict accountability, including liability for harms from misuse or malfunctions, tailored to India’s context. It draws from global best practices but prioritizes local needs like data sovereignty.
Emerging theories include service liability for AI companions, as explored in analyses of films like ‘Her’, questioning how emotional harms might be redressed. Calls for mandatory AI insurance and explainability mandates aim to bridge gaps.
Recent Developments and Cases in Other Countries
2025 has seen significant activity in AI liability, with emerging theories and legislative shifts.
Emerging Theories of Liability
Litigation is expanding across domains:
- Privacy: Cases like New York Times v. OpenAI allege unauthorized data scraping for training, invoking copyright and fair use debates. Biometric suits under Illinois’ BIPA, such as against Clearview AI, impose liability for non-consensual data collection, with settlements reaching millions.
- Consumer Rights: AI chatbots face negligent misrepresentation claims, as in the Air Canada case, where a chatbot’s erroneous advice bound the company.
- Employment: EEOC settlements address AI hiring bias, with rulings like Mobley v. Workday holding vendors as “agents” under anti-discrimination laws. Workplace monitoring tools raise further privacy and labor-law concerns.
- AI Detection and Education: False positives in plagiarism detectors lead to lawsuits for defamation or lost opportunities.
- AI Washing: SEC actions penalize overstated AI claims, with class actions under securities laws doubling in 2024.
These theories highlight AI’s litigation frontier, pushing courts to adapt doctrines.
US Developments
A federal proposal for a ten-year moratorium on state AI regulations aims to prevent fragmentation but could create a vacuum, leaving risks unaddressed. States like Rhode Island are advancing bills to hold developers accountable for AI harms where users are innocent. Arguments for liability emphasize its role in incentivizing safety without prescriptive rules, scaling with revealed risks.
International Perspectives: The EU AI Act
Effective August 2025, the Act classifies AI by risk, mandating transparency for general-purpose models, including detailed training data summaries to mitigate copyright and privacy liabilities. Providers face fines up to 3% of global turnover for non-compliance. New models must comply immediately, existing ones by 2027. Exemptions for R&D and military uses limit scope, but extraterritorial reach influences global standards. Data lineage gaps pose compliance challenges, heightening litigation risks.
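As a rough illustration of what such a training-data summary could involve at the engineering level, the sketch below compiles dataset provenance into a machine-readable report. The dataset names, fields, and output format are hypothetical assumptions and do not reproduce the Commission’s official summary template; the point is that data lineage must be tracked from the outset for these disclosures to be possible.

# Minimal sketch of compiling a training-data summary for transparency reporting.
# Dataset names, fields, and the output format are hypothetical illustrations,
# not the EU AI Act's official template.
import json
from dataclasses import dataclass, asdict

@dataclass
class DatasetRecord:
    name: str                     # internal corpus identifier
    source: str                   # where the data came from (licensed, scraped, user-provided)
    legal_basis: str              # licence or legal basis relied upon
    contains_personal_data: bool
    size_gb: float

training_datasets = [
    DatasetRecord("news_corpus_v2", "licensed publisher feed", "commercial licence", False, 120.0),
    DatasetRecord("web_crawl_2024", "public web crawl", "claimed TDM exception", True, 850.5),
    DatasetRecord("support_chats", "user-provided", "consent under privacy policy", True, 12.3),
]

summary = {
    "model_name": "example-gpm-1",  # hypothetical model identifier
    "total_size_gb": sum(d.size_gb for d in training_datasets),
    "datasets": [asdict(d) for d in training_datasets],
    "datasets_with_personal_data": [d.name for d in training_datasets if d.contains_personal_data],
}

print(json.dumps(summary, indent=2))

A provider that cannot generate even this level of lineage record after the fact faces exactly the compliance and litigation exposure the paragraph above describes.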
Other regions, like China with interim AI measures and Canada proposing the Artificial Intelligence and Data Act, emphasize accountability but vary in enforcement.
Proposals and Future Directions
To address limits, experts advocate:
- Strict Liability for High-Risk AI: Shift the burden to developers, encouraging optimal precautions.
- Explainability Mandates: Require auditable AI to facilitate claims.
- Insurance and Funds: Mandatory insurance or victim compensation funds to distribute risks.
- Federal Harmonization: In the US, centralized rules could coordinate with states, avoiding moratorium pitfalls.
- Global Standards: Harmonize via bodies like the OECD to handle cross-border harms.
Conclusion
India’s AI liability limits stem from regulatory fragmentation and AI’s complexities, risking unredressed harms in a high-stakes ecosystem. While CPA and DPDPA provide foundations, a dedicated AI Act is imperative to clarify attribution, enforce transparency, and balance innovation with safety. As India advances in global AI governance via GPAI, 2025 reforms could set precedents, ensuring accountable AI that drives inclusive growth. Without action, liability gaps may erode trust, underscoring the urgency for adaptive laws.
AI’s potential is immense, but the limits of legal liability reveal accountability gaps that could undermine public trust. Traditional frameworks provide a starting point, but AI’s autonomy, opacity, and scale demand evolution, as 2025’s cases on privacy and bias, the US moratorium debates, and the EU AI Act’s transparency push make evident. Without adaptive regimes, harms may go unredressed, stifling ethical innovation. Policymakers must prioritize liability rules that incentivize safety while fostering growth, perhaps through hybrid approaches blending regulation and market forces. As AI integrates deeper into society, establishing clear, enforceable limits on liability will be crucial to harnessing its benefits responsibly.
References:
- ResearchGate, https://www.researchgate.net/profile/Aayushi-Arya/publication/382325557_Limitations_of_AI_Skilling_Programs_in_India_A_Critical_Analysis/links/6697eece8dca9f441b832900/Limitations-of-AI-Skilling-Programs-in-India-A-Critical-Analysis.pdf, accessed 21 August 2025
- NITI Aayog, https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf, accessed 22 August 2025
- The Hindu, https://www.thehindu.com/news/cities/Madurai/artificial-intelligence-has-its-limits-people-have-unrealistic-expectations-of-it-says-iit-professor/article68461116.ece/amp/, accessed 21 August 2025





