Can Artificial Intelligence Trigger Article 20(3)

Authored By: Maheshwari Mahananda

National Law University, Odisha

Abstract

Artificial Intelligence (AI) now plays a significant role in India’s criminal justice system. Police agencies employ facial recognition technology, predictive policing models, voice-stress analysis, and advanced digital forensic tools capable of reconstructing deleted communication. While these systems enhance investigative capacity, they also raise pressing constitutional concerns, particularly relating to the protection against self-incrimination under Article 20(3) of the Constitution of India. Traditionally, Indian courts drew a clear distinction between “material evidence”, which may be obtained through compulsion, and “testimonial evidence”, which cannot. However, AI tools complicate this divide by transforming physical or passive data into interpretive conclusions about an accused’s intention, knowledge, behaviour, or mental state. This article evaluates whether AI-generated inferences can amount to compelled testimony under Article 20(3). It revisits landmark cases such as State of Bombay v Kathi Kalu Oghad and Selvi v State of Karnataka, analyses modern AI processes and constitutional doctrine, and proposes a functional test and procedural safeguards to ensure technological progress does not undermine mental privacy, voluntariness, and fair trial rights.

Introduction

Artificial Intelligence has shifted from being an experimental innovation to a routine feature of criminal investigations in India. Police departments in Delhi, Telangana, Uttar Pradesh, and Maharashtra have begun integrating facial recognition systems into CCTV networks; forensic laboratories increasingly utilise AI-based tools capable of recovering deleted messages and reconstructing digital timelines; and interrogation rooms witness the deployment of voice-based deception analysis technologies.

AI thus transforms three core aspects of policing:

  • Detection – identifying suspects rapidly using pattern recognition.
  • Reconstruction – piecing together deleted or fragmented digital data.
  • Interpretation – inferring emotions, deception, or behavioural probabilities.

However, these enhanced capabilities also challenge long-settled constitutional principles. Article 20(3) provides that “No person accused of any offence shall be compelled to be a witness against himself.” The interpretation of this right historically rested on a rigid distinction: physical evidence may be compelled, but testimonial evidence, which expresses mental content, cannot. AI, however, blurs this line by turning physical input (a face scan, voice sample, behavioural pattern) into interpretive output (stress analysis, deception prediction, behavioural inference).

If such outputs reflect knowledge, intention, or mental processes, they raise the question:

Can an AI system produce “testimony” on behalf of the accused, thereby triggering Article 20(3)?

To address this question, this article explores doctrinal foundations, evaluates contemporary AI mechanisms, identifies legal gaps, and proposes a functional constitutional framework suited to the digital era.

I: Article 20(3)

The Three Essential Requirements

Indian constitutional jurisprudence recognises that Article 20(3) protects an accused only if three elements exist:

  1. The person must be formally an accused.
  2. There must be compulsion.
  3. The compelled act must produce testimonial evidence.

Thus, only compelled testimonial evidence is prohibited.

Material vs Testimonial Evidence

Material Evidence

Material evidence includes fingerprints, blood samples, handwriting, DNA, footprints, and other physical characteristics. Such evidence does not reveal the mental content of the accused and may be compulsorily obtained.

Testimonial Evidence

Testimonial evidence refers to statements or acts revealing personal knowledge, intention, or mental processes. This category cannot be compelled.

The difficulty arises when AI transforms material input into testimonial output.

Kathi Kalu Oghad: The Foundational Rule

In State of Bombay v Kathi Kalu Oghad, the Supreme Court held that Article 20(3) protects only those acts which reveal the “contents of the mind”. Physical evidence exists independent of the accused’s volition, whereas testimonial evidence involves communication.

This material–testimonial distinction structurally dominates Indian self-incrimination doctrine.

Selvi v State of Karnataka: The Mental Privacy Expansion

In Selvi, the Court struck down compulsory narco-analysis, polygraph tests, and brain-mapping because they extracted mental content involuntarily.

The Court emphasised:

  • cognitive liberty,
  • mental privacy,
  • voluntariness, and
  • freedom from intrusive psychological techniques.

This case expanded the meaning of testimonial evidence beyond spoken confessions to include mental processes.

II: Artificial Intelligence in Indian Criminal Investigations

AI has become increasingly embedded in policing workflows. The following systems are most relevant to Article 20(3):

Facial Recognition Systems (FRS)

FRS compares CCTV images with police databases and identifies suspects.
FRS may be material evidence when simply identifying a face, but when combined with behavioural analytics (e.g., “frequent presence near crime-prone areas predicts intent”), it becomes interpretive, edging toward testimonial inferences.

AI-Based Digital Forensics

Modern forensic tools:

  • reconstruct deleted messages,
  • recover digital drafts,
  • restore incomplete audio logs,
  • decrypt or guess passwords probabilistically,
  • map digital timelines.

Reconstructed communication, especially deleted chats or emails, often contains mental content such as intention, knowledge, or admissions. If such reconstruction is enabled by compelled unlocking, Article 20(3) may be violated.

Voice and Emotion Analysis

These systems measure:

  • tone
  • micro-tremors
  • pitch variation
  • emotional states
  • stress indicators

to infer deception or guilt. Such analysis resembles polygraph tests and may be prohibited under the logic of Selvi. If the accused is compelled to speak specific phrases, the output becomes testimonial.

Predictive Policing Tools

Predictive tools assign behavioural probabilities or “risk scores.” If these outputs infer intention or likelihood of offending, they essentially generate statements about the accused’s mental state.
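To make this concrete, a hypothetical sketch of how such a “risk score” could be produced is given below: a logistic function over weighted behavioural features. The feature names and weights are invented for illustration and do not describe any real deployed system.

```python
import math

def risk_score(features: dict, weights: dict) -> float:
    """Map weighted behavioural features to a probability-like score in (0, 1).

    Hypothetical sketch only; feature names and weights are illustrative.
    """
    z = sum(w * features.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

# Invented example: two behavioural features, arbitrary weights.
weights = {"prior_incidents": 0.8, "night_movement": 0.3}
score = risk_score({"prior_incidents": 2.0, "night_movement": 1.0}, weights)
```

Because the resulting number purports to state a likelihood of offending, it functions as an inference about mental disposition rather than a description of observed facts, which is precisely why such outputs edge toward the testimonial category.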

Communication and Network Analysis

AI clusters communication logs to deduce roles and involvement.

These outputs may be:

  • descriptive (timeline reconstruction), or
  • testimonial (inferring knowledge or involvement).
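The descriptive/testimonial divide sketched above can be expressed as a simple labelling rule. The attribute vocabularies in the sketch below are hypothetical placeholders chosen to mirror this article’s distinction, not outputs of any actual tool:

```python
# Attributes that merely describe external data (hypothetical labels).
DESCRIPTIVE_ATTRIBUTES = {"timeline", "location", "device_identity"}
# Attributes that assert mental content of the accused (hypothetical labels).
TESTIMONIAL_ATTRIBUTES = {"knowledge", "intention", "deception", "involvement"}

def label_output(asserted_attributes: set) -> str:
    """An output is testimonial if it asserts any mental-state attribute."""
    if asserted_attributes & TESTIMONIAL_ATTRIBUTES:
        return "testimonial"
    return "descriptive"

label_output({"timeline", "location"})     # pure reconstruction → "descriptive"
label_output({"timeline", "involvement"})  # infers a role → "testimonial"
```

The rule deliberately errs toward the testimonial label: a single mental-state inference in an otherwise descriptive output is enough to change its constitutional character.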

III: Can Artificial Intelligence Trigger Article 20(3)? 

The central inquiry is whether AI-generated output can amount to testimonial evidence for the purposes of Article 20(3). 

Situations Where AI Does Not Trigger Article 20(3)

Passive Analysis of Pre-Existing Data

Where AI processes information that already exists independently of the accused, such as CCTV images, call logs, or public social-media posts, there is no compulsion and no communicative act. These outputs therefore do not qualify as testimonial evidence.

When No Act Is Required from the Accused

If police collect external data without requiring participation (e.g., analysing location metadata or scanning public footage), the accused is not compelled to act. There is no violation of Article 20(3).

Where AI Outputs Are Purely Descriptive

If an AI system provides only descriptive physical information, such as identifying whether a person appears in a video, its output does not reflect mental content.

Situations Where AI May Trigger Article 20(3)

When Compelled Interaction Is Required

If the accused is compelled to:

  • speak into a voice analysis tool,
  • smile, frown or show expressions for FRS-emotion detection,
  • provide repeated typed responses for behavioural analysis, or
  • undergo voice-stress or deception detection,

the resulting output is a product of compelled participation.
Given Selvi, any investigative practice compelling a person to perform a cognitive or communicative act may violate Article 20(3).

AI Reconstruction of Deleted Testimonial Content

When AI reconstructs deleted chats, voice notes, emails, or drafts, the output consists of testimonial content reflecting the mental state of the accused.
If the reconstruction was enabled through compelled unlocking of the device (e.g., forced biometric authentication), this becomes involuntary testimonial extraction.

AI Inferences About Mental State

AI tools that analyse:

  • deception likelihood,
  • emotional shifts,
  • behavioural tendencies,
  • likelihood of guilt,

directly generate conclusions about the mental processes of the accused. These inferences are testimonial in nature because they reveal (or purport to reveal) mental content.

Compelled Material Input with AI Interpretive Output

Even if the compelled input is material (like a face scan), the output may still become testimonial if it interprets mental state or communicates knowledge.
This scenario is unprecedented in Indian doctrine, which did not anticipate algorithmic interpretation.

IV: Why AI Complicates Traditional Self-Incrimination Doctrine

AI Adds Layers of Interpretation

Unlike traditional forensic tools, AI:

  • identifies patterns invisible to humans,
  • assigns meaning to behaviour,
  • makes predictions,
  • classifies psychological states.

Such interpretive outputs often resemble testimonial statements.

AI May Generate Testimony the Accused Never Gave

AI can infer:

  • intention (“likely to be involved”),
  • emotion (“high stress during questioning”),
  • deception (“90% likelihood of lying”),
  • behavioural predispositions (“high-risk offender profile”).

This raises a constitutional question:

Can the State use technology to create testimony on behalf of the accused?

Opaqueness and Lack of Explainability

Many AI systems function as “black boxes”: their decision-making processes are not transparent. This challenges:

  • cross-examination rights,
  • reliability assessment,
  • admissibility evaluation under Sections 45A and 65B of the Evidence Act.

Increased Persuasiveness of AI Evidence

Judges may over-rely on AI outputs because they appear objective, scientific, or data-driven, despite high error rates and bias.

Existing Doctrine Did Not Anticipate AI

Kathi Kalu Oghad dealt with simple physical evidence.

Selvi dealt with intrusive psychoanalytic techniques.

Neither case considered algorithmic inference or digital reconstruction.

This doctrinal gap necessitates a re-evaluation.

V: India’s Legal Gaps in Regulating AI in Criminal Procedure

No Statutory Definition of AI Evidence

The Indian Evidence Act, 1872 and CrPC contain no provisions defining AI-generated evidence or specifying admissibility standards.

No Reliability or Transparency Standards

Courts lack guidance on:

  • acceptable error margins,
  • bias assessment,
  • reproducibility requirements,
  • algorithmic explainability.

Without these standards, algorithmic inferences may enter courtrooms unchecked.

No Warrant Requirements for AI Forensic Extraction

AI-driven device-level forensics often occur without judicial oversight, enabling deep data extraction without safeguards.

Mental-State AI Is Unregulated

AI emotion analysis, deception prediction, and behavioural scoring have no statutory limits, despite their proximity to prohibited polygraph and narco-analysis techniques.

Institutional Under-Preparedness

Neither the police nor the judiciary is equipped to interrogate:

  • model bias,
  • training data limitations,
  • system accuracy,
  • probabilistic interpretations.

This collective lack of literacy endangers fair trial rights.

VI: A Proposed Constitutional Framework for AI and Article 20(3)

To ensure doctrinal clarity, this article proposes the following three-part test for determining whether AI-triggered processes should fall within the scope of Article 20(3):

Test 1: Compulsion

Was the accused compelled to:

  • provide biometric input?
  • unlock a device?
  • speak or perform actions for analysis?
  • cooperate with interrogation-driven AI?

If yes, the first requirement is satisfied.

Test 2: Mental Content

Does the AI output:

  • reconstruct communications?
  • infer emotions or deception?
  • reveal knowledge, intention, or thought process?
  • piece together testimonial digital traces?

If yes, the output is testimonial regardless of whether the input was material.

Test 3: Interpretation or Transformation

Does the AI:

  • merely describe data? (material)
  • OR interpret, classify, predict, or infer? (testimonial)

If interpretive, Article 20(3) is likely implicated.

Combined Effect

If any two of the above tests are satisfied, the process should attract the protection of Article 20(3).
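The three tests and their combined effect can be summarised as a decision rule. The function below is a sketch of the framework proposed in this article, with boolean inputs standing in for judicial findings on each test:

```python
def article_20_3_implicated(compulsion: bool,
                            mental_content: bool,
                            interpretive: bool) -> bool:
    """Combined-effect rule from this article's proposed framework:
    Article 20(3) is implicated if any two of the three tests
    (compulsion, mental content, interpretation) are satisfied."""
    return sum([compulsion, mental_content, interpretive]) >= 2

# Compelled voice sample analysed for deception: all three tests met.
article_20_3_implicated(True, True, True)      # True
# Passive analysis of public CCTV with purely descriptive output.
article_20_3_implicated(False, False, False)   # False
```

The any-two threshold reflects the article’s view that the tests reinforce one another: for example, an interpretive output that reveals mental content should be caught even where the compulsion question is contested.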

Procedural Safeguards

  1. Judicial warrants for device-level AI forensics.
  2. Transparent disclosure of algorithms and error rates.
  3. Restrictions on predictive behavioural profiling.
  4. Mandatory voluntariness checks for AI-assisted interrogations.
  5. Independent AI audits of police tools.
  6. Judicial training on AI biases, limitations, and probabilistic outputs.

AI must be integrated into criminal procedure without compromising mental privacy and autonomy.

VII: Conclusion

Artificial Intelligence has the potential to revolutionise India’s criminal justice system by improving investigative efficiency, expanding digital forensic capability, and enhancing public safety. However, AI also introduces unprecedented threats to constitutional rights, particularly the guarantee against compelled self-incrimination under Article 20(3).

The traditional doctrinal distinction between material and testimonial evidence, established in Kathi Kalu Oghad, becomes increasingly inadequate in the face of modern AI systems that infer mental states, reconstruct deleted communication, and generate behavioural predictions. Similarly, while Selvi expanded the scope of mental privacy protection, the jurisprudence did not fully anticipate technologies capable of deriving psychological or behavioural insights algorithmically.

This article demonstrates that AI can trigger Article 20(3) when:

  1. the accused is compelled to provide biometric or behavioural input;
  2. AI outputs reconstruct mental content, emotions, knowledge, or intention; or
  3. the system interprets, predicts, or transforms data into quasi-testimonial conclusions.

AI-generated inferences, especially those suggesting deception, intention, involvement, or guilt, are not merely descriptive outputs. They amount to technologically generated testimony.
If the police obtain such outputs through compulsion, the constitutional protection should apply.

Given the absence of statutory standards governing AI evidence, India urgently requires procedural safeguards. These include judicial warrants for device-level AI searches, disclosure of algorithmic methodologies, limits on behavioural profiling, and mandatory judicial training. Only through such reforms can India ensure that technological advancement does not erode fundamental rights or undermine due process.

Ultimately, the criminal justice system must embrace innovation without compromising constitutional morality. AI should remain a tool of justice, not a mechanism for involuntary self-incrimination.

Bibliography 

Cases

District Registrar and Collector v Canara Bank (2005) 1 SCC 496.

KS Puttaswamy v Union of India (2017) 10 SCC 1.

PUCL v Union of India (1997) 1 SCC 301.

Ritesh Sinha v State of Uttar Pradesh (2019) 8 SCC 1.

Selvi v State of Karnataka (2010) 7 SCC 263.

State of Bombay v Kathi Kalu Oghad AIR 1961 SC 1808.

Legislation

Constitution of India 1950, art 20(3).

Indian Evidence Act 1872, ss 45A, 65A–65B.

Books

Andrew Ashworth, Principles of Criminal Law (OUP 2021).

Journal Articles

Lawrence Solum, ‘Artificial Meaning’ (2020) 89(3) Fordham Law Review 501.

Reports and Policy Documents

Amnesty International, Automated Policing and Human Rights in India (2021).

Ministry of Home Affairs, AI for Policing: Vision Document (2021).

NITI Aayog, National Strategy for Artificial Intelligence #AIforAll (2018).

OECD, State of AI Governance Report (2022).
