
Artificial Intelligence and the Law: Balancing Innovation Ethics and Liability in a Global Context

Authored By: Jasmeen Kauser S.A. Gadwal

AKK NEW LAW ACADEMY PUNE

Artificial Intelligence (AI) has transformed sectors from healthcare and finance to justice administration, driving efficiency and novel capabilities through machine learning, neural networks, and generative models. Yet this rapid advancement raises profound legal questions: how to foster innovation without compromising ethical standards or exposing parties to unmanageable liability. 

Key concerns include algorithmic bias leading to discrimination, opaque “black-box” decision-making hindering accountability, privacy violations in data-intensive training, and determining responsibility when AI causes harm—whether through erroneous outputs, IP infringement, or safety failures.

Globally, jurisdictions adopt divergent approaches. The European Union emphasises risk-based mandatory rules (AI Act), while others favour principles-based or voluntary frameworks to avoid stifling innovation. 

This article examines the Indian perspective primarily through Supreme Court Cases Online (SCC Online) jurisprudence and statutes, contrasted with the UK’s evolving principles-based regime and Singapore’s influential Model AI Governance Framework. 

It argues for balanced, context-specific regulation that allocates liability proportionately, mandates ethical safeguards like transparency and fairness, and incorporates innovation enablers such as regulatory sandboxes.

Indian Perspective

India lacks dedicated comprehensive AI legislation, relying instead on existing frameworks like the Information Technology Act, 2000 (as amended), the Copyright Act, 1957, and the Digital Personal Data Protection Act, 2023 (DPDPA). 

The DPDPA governs personal data processing central to AI training and deployment, requiring consent or legitimate uses, granting data principals rights (access, correction, erasure), and imposing obligations on data fiduciaries for purpose limitation, data minimisation, and security.

Exemptions exist for publicly available data and certain research, but automated processing (including AI) triggers safeguards against unfair outcomes; non-compliance invites significant penalties.

Copyright poses particular challenges. The Copyright Act, 1957 requires human authorship for protection (Sections 2, 13, 17). 

In Eastern Book Co. v. D.B. Modak, the Supreme Court held that copyright demands skill, judgment, and a modicum of creativity from human effort; mere labour or mechanical reproduction does not suffice.

AI-generated content, derived algorithmically from training data without human creative input, generally fails this test, leaving outputs unprotected and raising moral rights issues under Section 57.

Training data scraping risks infringement. 

In R.G. Anand v. Delux Films, the Supreme Court distinguished substantial copying (imitation) from inspiration, assessing overall similarity from a lay observer’s viewpoint.  

AI models trained on copyrighted works without licence may constitute infringement if substantial protected material is used, especially given India’s narrower fair dealing provisions compared to fair use jurisdictions. 

This question is being litigated in Ani Media (P) Ltd. v. OpenAI Inc., where allegations that news content was used for model training have prompted judicial scrutiny of developer versus user liability, with no clear safe harbour for AI training.

The judiciary engages AI cautiously. The Supreme Court has introduced tools like SUPACE (for case analysis), SUVAS (translation), and TERES (transcription), supported by a White Paper on AI in the Judiciary emphasising AI as an assistive tool, not a replacement for human judgment. 

Concerns over AI "hallucinations" producing fake precedents, deepfakes, and bias have led to petitions for guidelines; the Court has declined to issue judicial directions (treating the matter as administrative) but has flagged misuse risks, as where fabricated citations were detected in court filings.

Liability remains underdeveloped. AI is treated as a tool; human actors (developers, deployers, users) bear responsibility under negligence, contract, or statutory provisions (e.g., IT Act intermediary safe harbours, subject to due diligence). Proving causation is difficult due to opacity, and no legal personhood exists for AI.

United Kingdom Perspective

The UK adopts a principles-based, sector-specific approach without an overarching AI statute (as of late 2025), though legislation planned for 2025 would make voluntary codes binding on developers of powerful models, strengthen the AI Safety Institute, and impose targeted requirements.

Core regulatory principles (safety/security/robustness; transparency/explainability; fairness; accountability/governance; contestability/redress) guide existing regulators (e.g., under Data Protection Act 2018/GDPR for automated decisions with significant effects, Equality Act 2010 for bias/discrimination).

Liability draws from tort (negligence—duty, breach, causation, damage), contract, product liability (Consumer Protection Act 1987, evolving for software/AI), and corporate law. 

Directors must exercise reasonable care, skill, and diligence (Companies Act 2006, ss. 172, 174) in managing AI risks, promoting company success while considering broader impacts. 

Challenges include assigning responsibility across value chains (developer vs. deployer) and opacity hindering foreseeability. Proposals clarify proportionate liability and redress mechanisms.

Singapore Perspective

Singapore’s Model AI Governance Framework (PDPC, 2nd ed. 2020; updates for generative/agentic AI) is voluntary, accountability-focused, and innovation-oriented.  

It rests on human-centricity and fairness/explainability/transparency, with practices in decision-making (human oversight), data (quality, bias minimisation), model governance (robustness, reproducibility), and operations (monitoring, incident response). 

Complementary tools include AI Verify for testing against 11 ethics principles (transparency, fairness, safety, accountability, etc.) and self-assessment guides.

The Personal Data Protection Act (PDPA) governs personal data in AI, with obligations on consent, purpose, security, and rights. Liability typically arises in negligence (standard of care for AI deployment) or contract; no dedicated AI liability statute exists, but the framework encourages internal governance (roles, SOPs, training) to demonstrate due diligence and mitigate harm. Singapore emphasises practical, sector-agnostic guidance to build trust while enabling adoption.

Balancing Innovation, Ethics, and Liability

Ethics demand proactive measures: bias audits, explainability techniques (where feasible), impact assessments, and inclusive design to prevent discrimination or societal harm. Transparency (disclosing AI use, model limitations) and contestability (human review, redress) are essential.

Liability allocation should be proportionate—strict or heightened for high-risk systems (e.g., safety-critical), negligence/default rules otherwise—with safe harbours or limitations for good-faith developers meeting standards. 

Challenges of opacity and multi-party chains require evidentiary aids (logging, audits) and potential statutory presumptions or insurance mandates.

To balance: 

(1) Risk-based classification (low/high) with graduated obligations.

(2) Regulatory sandboxes for testing. 

(3) Ethical committees/internal accountability structures. 

(4) International cooperation for harmonised standards (e.g., data flows, IP).

(5) Public-private collaboration and judicial capacity-building. 

India could draw from Singapore’s voluntary model and UK’s principles while legislating core safeguards under DPDPA/IT Act extensions or new AI ethics/accountability rules.

In conclusion, AI’s promise requires adaptive law that protects rights, ensures accountability, and sustains innovation. India, UK, and Singapore illustrate complementary paths—data-centric safeguards, principles-driven oversight, and practical governance—towards a responsible global AI ecosystem. Targeted, evidence-based reforms, informed by jurisprudence and stakeholder input, will be key to realising benefits while mitigating risks.

Footnotes:

¹ The Digital Personal Data Protection Act, 2023, No. 22, Acts of Parliament, 2023 (India).

² Eastern Book Co. v. D.B. Modak, (2008) 1 SCC 1 (India).

³ R.G. Anand v. Delux Films, (1978) 4 SCC 118 (India).

⁴ Ani Media (P) Ltd. v. OpenAI Inc., 2024 SCC OnLine Del 8120 (India).

⁵ Dep’t for Sci., Innovation & Tech., A pro-innovation approach to AI regulation (White Paper, 2023) (U.K.); see also planned 2025 measures per U.K. Gov’t announcements.

⁶ Pers. Data Prot. Comm’n, Model Artificial Intelligence Governance Framework (2d ed. 2020) (Sing.); see also updates for generative AI (2024).

Note on sources and originality: Citations follow Bluebook 21st ed. (T2.18 for India; adapted for U.K./Singapore policy documents). Analysis draws on synthesised legal principles from SCC Online cases, DPDPA, U.K. principles/planned measures, and Singapore Model Framework. All content is originally formulated.
