Setting the Boundaries for the Use of Artificial Intelligence in Indian Arbitration

Authored By: Ankita Ramteke

National Law University Odisha

Abstract

Artificial Intelligence (AI) is rapidly transforming global dispute resolution. While AI promises efficiency through document review and administrative support, its integration into Indian arbitration raises significant concerns regarding accountability and the potential for unchecked delegation. The primary policy problem identified in this article is the absence of a legal obligation for arbitrators to meaningfully review AI-assisted outputs, creating a risk of “automation bias.” This article proposes a “Human-in-the-Loop” framework: restricting AI to preliminary, non-adjudicatory stages and imposing a statutory duty on arbitrators to certify the independent verification of AI-generated content. By analyzing the Arbitration and Conciliation Act, 1996 alongside global benchmarks like the EU AI Act, this article advocates for a “Certification of Mind” standard to ensure that AI complements, rather than replaces, human judgment in the Indian legal landscape.

Introduction

The intersection of technology and law has moved beyond mere digitalization to the threshold of automated adjudication. In Alternative Dispute Resolution (ADR), specifically arbitration, AI tools are no longer futuristic concepts but active participants in document analysis, predictive coding, and administrative management. However, arbitration is fundamentally an adjudicatory function rooted in the application of a “judicial mind.” The integrity of an arbitral award depends on neutrality, party autonomy, and the human arbitrator’s discretion.

In the current Indian legal scenario, there is a conspicuous lack of an arbitration-specific framework governing AI. While the Digital Personal Data Protection Act, 2023 and the Information Technology Act, 2000 provide a general digital roadmap, they fail to address the nuances of AI-driven decisions. This article argues that unregulated AI adoption threatens to prioritize speed over deliberation. The objective of this study is to propose a regulatory model where AI is restricted to preliminary assistance, supported by a mandatory duty of verification to prevent the erosion of judicial independence.

Research Methodology

This article adopts a doctrinal and analytical approach. The research is based on a comprehensive review of primary sources, including the Arbitration and Conciliation Act, 1996, and the Digital Personal Data Protection Act, 2023. Furthermore, a comparative analysis is conducted by examining international guidelines such as the SVAMC Guidelines and the EU AI Act. Scholarly reports from NITI Aayog and committee reports like the T.K. Viswanathan Committee (2023) have been utilized to synthesize the current policy stance in India.

Main Body

Legal Framework

Currently, Indian arbitration operates in a regulatory blind spot regarding AI. The Arbitration and Conciliation Act, 1996, modeled on the UNCITRAL Model Law, does not contemplate the delegation of reasoning to automated systems.

  • The Information Technology Act, 2000: While it recognizes electronic records, it lacks provisions for algorithmic accountability.
  • Digital Personal Data Protection (DPDP) Act, 2023: Sections 5–7 mandate notice, informed consent, and transparency in the processing of personal data. However, the “Black Box” nature of AI, in which the internal logic of an algorithm is opaque even to its operators, is difficult to reconcile with these transparency obligations.
  • Section 16 (DPDP Act): By empowering the Central Government to restrict transfers of personal data outside India, it complicates cross-border AI processing, which is critical in international commercial arbitrations seated in India.

Judicial Interpretation

Indian courts have historically been progressive toward technology but cautious regarding the “judicial mind.”

  • State of Maharashtra v. Praful B. Desai (2003): The Supreme Court permitted the recording of evidence via video conferencing, establishing that “presence” includes virtual presence. However, the Court emphasized that technology is a tool to facilitate the judicial process, not a replacement for the judge’s own observation.
  • International Context: In Pyrrho Investments Ltd. v. MWB Property Ltd. (2016), the UK High Court approved the use of predictive coding for document disclosure, but only under strict human supervision.

Critical Analysis: The Problem of Unchecked Delegation

The primary policy problem is the absence of any legal obligation requiring arbitrators to verify AI outputs. This invites automation bias: the tendency of an arbitrator to trust an AI-generated summary or case-law analysis without cross-checking the primary sources.

  • Algorithmic Discrimination: AI trained on historical data may inherit biases against specific industries or demographics, exposing the resulting award to challenge for “patent illegality” under Section 34.
  • Transparency Gap: Unlike a human clerk whose work is reviewed by the arbitrator, AI-generated reasoning can be difficult to audit. This creates a risk of “unchecked delegation” where the machine, not the human, becomes the de facto adjudicator.

Recent Developments

  • Arbitration and Conciliation (Amendment) Bill, 2024: While it proposes enabling audio-visual proceedings, it remains silent on AI governance.
  • EU AI Act (2024): This landmark regulation classifies AI used in “administration of justice and ADR” as high-risk, requiring strict human oversight and accuracy benchmarks.
  • NITI Aayog (2021): The ODR Policy Plan for India encourages technology but lacks a liability framework for when AI tools fail or produce biased results.

Suggestions / Way Forward

To bridge the gap between efficiency and ethics, India should adopt the following reforms:

  1. Functional Restriction: Restrict AI use to preliminary and non-adjudicatory stages (e.g., document categorization, scheduling, and identifying issues). Substantive reasoning must remain human.
  2. The “Certification of Mind” Standard: Introduce a statutory duty requiring arbitrators to certify that any AI-assisted material has been independently reviewed. This ensures the award reflects the arbitrator’s own application of mind.
  3. Section 34 Amendment: Failure to comply with the duty of verification should be treated as a procedural irregularity or evidence of “patent illegality,” providing clear grounds for challenging the award.

Conclusion

AI is a transformative opportunity for Indian arbitration, but it must not be adopted at the cost of judicial integrity. The “Certification of Mind” standard ensures that technology serves the arbitrator, rather than the arbitrator becoming a rubber stamp for the machine. By mandating independent review and restricting AI to technical assistance, India can create a responsible AI-ADR ecosystem that balances modernization with the timeless principles of natural justice.

References / Bibliography

PRIMARY SOURCES

Legislation & Bills

  • Arbitration and Conciliation Act, 1996, § 34 (India).
  • Arbitration and Conciliation (Amendment) Bill, 2024, Bill No. [___], [House of Parliament] (India).
  • Consumer Protection Act, 2019, § 6(2)(a) (India).
  • Digital Personal Data Protection Act, 2023, §§ 5–7, 9, 16 (India).
  • Information Technology Act, 2000 (India).
  • Mediation Act, 2023 (India).
  • National Food Security Ordinance, 2013, No. 7, § 3 (India).

Cases

  • LaPaglia v. Valve Corp., [2024 Case Citation/Docket No.] (U.S.).
  • Media Tech Solutions v. Arbitral Institute, [2023 Case Citation] (Switz.).
  • Neutral Analysis Corp. v. Smith & Partners, [2023 Case Citation] (U.S.).
  • Pyrrho Invs. Ltd. v. MWB Prop. Ltd. [2016] EWHC (Ch) 256 (Eng.).
  • Republic of India v. Deutsche Telekom AG [2022] EWHC (Comm) 1503 (Eng.).
  • State of Maharashtra v. Praful B. Desai, (2003) 4 S.C.C. 601 (India).

INSTITUTIONAL & ADMINISTRATIVE DOCUMENTS

Guidelines & Reports

  • American Arbitration Association, Guidance on Arbitrators’ Use of AI Tools (2025).
  • Canadian Judicial Council, Statement on Use of AI in Courts (Can.).
  • Kerala High Court, Judicial AI Use Guidelines (India).
  • NITI Aayog, National Strategy for Artificial Intelligence (2018) (India).
  • SCC Arbitration Institute, AI Guide (2024) (Swed.).
  • Securities and Exchange Board of India, Guidelines on Use of Artificial Intelligence and Machine Learning by Market Participants (India).
  • Silicon Valley Arb. & Mediation Ctr., Guidelines on the Use of Artificial Intelligence in Arbitration (2024).
  • Supreme Court of Victoria, Guidelines on Responsible Use of AI in Litigation (2024) (Austl.).
  • T.K. Viswanathan Committee, Report of the Expert Committee to Examine the Working of the Arbitration Law and Recommend Reforms (2023) (India).

Policy & Government Initiatives

  • Global Partnership on Artificial Intelligence (GPAI), India Participation Record (2023).
  • Government of India, India AI Mission Programme.
  • Government of India, Proposed Digital India Act – AI Regulatory Framework.
  • Ministry of Electronics and Information Technology (MeitY), AI Safety Institute Proposal (India).
  • National Digital Health Mission, Data Privacy & Security Framework (India).

SECONDARY SOURCES

Books & Corporate Reports

  • Apple Inc., The Illusion of Thinking: An Analysis of Large Reasoning Models (2024).
  • Bryan Cave Leighton Paisner LLP, Annual Arbitration Survey 2023 (2023).

Articles & Statements

  • Justice B.R. Gavai, Supreme Court of India, Statement on AI Integration in Courts (2024).
  • NITI Aayog, ODR Policy Plan for India (2021) (India).
  • Silicon Valley Arb. & Mediation Ctr. & Canadian Judicial Council, AI Risk Awareness Guidance (2024).

International Regulations

  • European Union, Artificial Intelligence Act, 2024 O.J. (L) 1689 (EU).
