Deepfakes and Digital Evidences: A New Challenge to the Indian Judiciary

Authored By: Anshuman Singh

Faculty of Law, University of Allahabad

Abstract

The rapid advancement of technology and artificial intelligence has brought forth deepfake technology: hyper-realistic synthetic media that can manipulate audio, video, and images with alarming precision. While this innovation has applications in entertainment and education, it poses serious threats to the integrity of digital evidence in legal proceedings. In the Indian judicial context, where digital evidence increasingly plays a pivotal role in both civil and criminal trials, the rise of deepfakes presents a profound challenge. This article explores the legal, ethical, and technological dimensions of deepfakes and their impact on the admissibility and reliability of digital evidence in Indian courts. It examines the existing legal frameworks, chiefly the Indian Evidence Act, 1872, and the Information Technology Act, 2000, and assesses their adequacy in addressing synthetic-media manipulation. It concludes that a rapid response is needed, built on robust forensic capabilities, legislative reform, and judicial training.

Key words: Deepfakes; digital evidence; Indian judiciary; electronic records; Indian Evidence Act; chain of custody; admissibility; forensic verification; synthetic media; AI‑generated evidence.

Introduction

In recent years, the judicial landscape of India has witnessed a dramatic evolution: digital evidence (audio recordings, videos, social-media captures, metadata logs) has increasingly assumed a central role in both criminal and civil proceedings. Courts have recognised that in many cases, especially where eyewitness testimony is absent or unreliable, electronically generated or stored data may prove decisive. For instance, as noted at a seminar at the Uttar Pradesh State Institute of Forensic Sciences, digital evidence has become a decisive tool particularly when no eyewitnesses are available.

The Indian judicial system is in the midst of transformation. With increased use of digital technology, courts routinely confront electronic records—CCTV footage, mobile phone data, chat logs, social‑media posts, server records—as key pieces of evidence. The acknowledgement of digital evidence’s import is reflected in discussions at Indian forensic institutes and judicial seminars.

At the same time, the same digital revolution has spawned an insidious and rapidly proliferating threat: deepfakes. These are AI-generated or AI-manipulated audio, video, or image-based media so realistic that they can mimic actual persons or events with alarming fidelity. A judge of the Supreme Court of India observed that "the emergence of deepfake technology … is ground-breaking but raises alarms regarding privacy invasion, security risks and propagation of misinformation."

When the judiciary relies on electronic records to ascertain facts, the possibility that those records have been manipulated threatens to undermine the fact‐finding process itself. In other words: what was once viewed as ‘hard’ evidence may no longer be reliable unless the courts adapt to new threats.

It is therefore important to examine closely the challenge that deepfakes present to digital evidence in India. We begin by unpacking the nature and technical features of deepfakes, and then turn to the challenges and implications they pose for society and for the judicial system.

The Emergence of Deepfakes

What are deepfakes?

The term “deepfake” combines “deep learning” and “fake” and refers to media created or modified through AI techniques, especially generative adversarial networks (GANs) or auto-encoders, so as to replicate or mimic a person’s likeness, voice, gestures or facial expressions. Because the technology increasingly allows generation of content that is virtually indistinguishable from real recordings, the term encompasses a spectrum: from face-swapped videos and voice-cloned audio to fully synthetic individuals.

Technically, deepfakes are problematic because (1) generation techniques are rapidly improving, (2) detection is often reactive and difficult, and (3) manipulation can occur in multiple layers (video, audio, and context). A recent academic effort illustrates the India-specific dimension: the “Hindi audio-video-Deepfake (HAV-DF)” dataset, a Hindi-language video-audio deepfake dataset built to reflect language-specific vulnerability in India.

Why the Indian context is especially vulnerable

Several features amplify the risk of deepfakes in India:

  • Large mobile and social‑media penetration, meaning manipulated content can spread widely and quickly.
  • Variable levels of digital literacy, particularly in semi‑urban and rural areas where synthetic video may be accepted at face value.
  • A judiciary that has increasingly embraced digital evidence but may not have matched capacities in forensic verification.
  • Political stakes are high: media manipulation (including deepfakes) in electoral or public order contexts is a genuine threat. For example, viral AI‑generated videos of Indian celebrities were disseminated to influence public opinion in the run‑up to the 2024 general election.

Recent legal / judicial acknowledgement

The Supreme Court of India, through Justice Hima Kohli, publicly pointed out that deepfake technology’s indistinguishability from reality undermines authenticity of information and jeopardises identity and reputation. A recent Indian commentary on deepfakes notes the lack of specific regulation and the reactive nature of Indian law.

Thus, deepfakes are no longer hypothetical; they are present, accessible and pose real threats to digital evidence integrity.

The Legal & Institutional Framework for Digital Evidence in India

Key statutes and their relevance

Indian Evidence Act, 1872

Sections 65A and 65B of the Evidence Act address electronic records. Section 65B provides that an electronic record can be admitted if a certificate is furnished in the prescribed format about the origin and integrity of the record. This mechanism was introduced to ensure the authenticity of digital evidence. However, digital-era challenges such as synthetic media were not envisaged at the time of drafting.

Information Technology Act, 2000

The IT Act addresses cyber offences, intermediaries and electronic governance. Provisions such as Section 66C (identity theft), 66D (cheating via computer), 67/67A (obscenity in electronic form) have been invoked in cases of synthetic media.

Recent statutes: Bharatiya Sakshya Adhiniyam, 2023 & Bharatiya Nyaya Sanhita, 2023 

The recent reform of evidence and criminal statutes seeks to modernise India’s laws. The BSA, which replaces the Indian Evidence Act, expands definitions to include digital/electronic records as “documents” and aims to streamline admissibility.

Challenges in practice

Though these statutes provide a framework, deepfakes expose gaps:

  • No specific legal definition of “deepfake” or “AI‑generated synthetic media” exists in Indian law.
  • Admissibility under Section 65B often relies on a certificate by a person in control of the device; but when the media has been manipulated via AI, origin and integrity are difficult to establish.
  • Forensics infrastructure in law‑enforcement / judicial system is still evolving; detecting deepfakes requires specialized tools and expertise.
  • Intermediary liability and platform regulation (via IT Rules, 2021) provide for content takedown, but do not address deepfake creation or proactively prevent circulation.

In sum, while India has a legitimate legal scaffold for digital evidence, that scaffold was not built with the possibility of AI‑generated synthetic media in mind. This mismatch is precisely what creates the new challenge.

How Deepfakes Threaten Digital Evidence and Judicial Fact‑Finding

Authenticity and Reliability

A foundational principle of evidence law is that submitted material reflects what it purports to show. With deepfakes, a recording may appear to show a person uttering words or doing acts that never occurred. The court must then discern whether the media is genuine or manipulated. As one commentary puts it, “what appears to show something may never have occurred.”

In digital‑evidence cases, electronic records gained credibility because they were considered less fallible than human testimony. Deepfakes threaten to reverse that perception.

Chain of Custody and Integrity

Digital evidence’s value also depends on preserving a credible chain of custody and proving it hasn’t been tampered with. Deepfakes complicate this because:

  • The manipulated media may look indistinguishable from a genuine recording.
  • The origin may be hidden (e.g., deepfake generation overseas, hosted on foreign platform).
  • Standard certification under Section 65B may not suffice to guarantee integrity when the evidence is synthetically created.
  • Legal commentary notes that “at the moment, India lacks a specialised legal or forensic framework to verify the veracity of AI‐powered audio or video evidence.”

Admissibility and Burden of Proof

Admissibility of electronic records under Indian law depends on satisfaction of conditions (certificate, origin, alteration‑free). With deepfakes, the party presenting the evidence may face a defence challenge that the media is manipulated; this potentially raises doubts about admissibility or reliability. For instance, the “liar’s dividend” concept arises: the defence may argue even genuine evidence is fake because deepfakes exist, thereby raising reasonable doubt.

Investigative and Forensic Capacity

Courts and investigators need tools (technical and procedural) to detect manipulation: frame analysis, metadata scrutiny, forensic watermarking, hash‑checks, AI‑detection tools. However, India’s forensic capacity is still developing, and the pace of deepfake creation outstrips the availability of forensic tools. One report, “Courts aren’t ready for AI‑generated evidence…”, notes that forensic analysts warn even they cannot reliably trace the chain of custody for deepfakes.
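The simplest of the checks mentioned above, the hash check, can be sketched in a few lines of code. The sketch below is purely illustrative: it shows the general principle of comparing a cryptographic digest recorded at seizure against the exhibit's current digest, and does not represent any procedure prescribed by Indian law or forensic standards; the function names are the author's own.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so that large video exhibits do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def integrity_intact(path: str, digest_at_seizure: str) -> bool:
    """True if the exhibit's current digest matches the digest
    recorded when it was seized; any alteration changes the hash."""
    return sha256_of_file(path) == digest_at_seizure
```

Note that a matching hash only shows the file is unchanged since the digest was recorded; it says nothing about whether the content was synthetic to begin with, which is precisely why hash checks must be combined with the other detection tools discussed above.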

Defence Strategy and Judicial Trust

Deepfakes allow new strategies: a defendant might claim that incriminating media is a fake deepfake. This complicates judicial assessment of evidence and may erode confidence in digital media. The erosion of trust is systemic: if courts begin to discount or scrutinise all popular visual/audio records, litigants and the public may lose faith in the evidentiary value of digital records.

Broader Impact on Judicial Process

  •  Delay and cost: verifying authenticity may require expert forensic analysis, increasing litigation costs and causing delays.
  • Evidentiary weight shift: courts may place less reliance on prima facie strong electronic records, tilting back to traditional testimony or circumstantial evidence.
  • Access to justice concerns: both prosecution and defence may be disadvantaged—resources for forensic checks are unevenly available across states and socio‑economic groups.
  • Public credibility: In high‑profile cases (political, electoral, reputation), circulation of deepfakes may influence public perception regardless of evidence reliability, affecting institutional trust.

Illustrative Indian Developments

While there is limited Indian jurisprudence explicitly dealing with deepfake evidence, several events point to the growing recognition of the risk.

  • In April 2024, AI‑generated videos of Bollywood actors criticising the Prime Minister and urging votes to opposition parties went viral, raising concerns of synthetic‑media interference in elections.
  • Legal commentary notes that Indian courts are increasingly confronted with matters where media authenticity is contested, and deepfakes blur the line of admissibility.

These developments suggest that the Indian judiciary must anticipate more frequent instances of manipulated digital evidence.

Reform Imperatives for the Indian Judiciary

Given the threat posed by deepfakes, the following reform streams are recommended:

1.  Legislative and Regulatory Reform

  •  Definition of deepfakes and synthetic media: Indian law should explicitly define “deepfake” or similar AI‑generated manipulated media in relevant statutes, distinguishing between benign uses (satire, parody) and malicious manipulation.
  • Amendment to evidence law: The BSA and Evidence Act could include special provisions for synthetic‑media evidence: e.g., mandatory disclosure of AI‑generation, forensic certificate of authenticity, presumptive inadmissibility unless authenticated.
  • Platform/intermediary liability: While the IT Rules 2021 prescribe takedown norms, new rules may place proactive obligations on intermediaries to flag, label or remove known deepfakes, especially in contexts of elections, defamation or pornography.

2.  Forensic Infrastructure and Standard Protocols

  • National forensic standard: Create uniform protocols for detecting deepfakes: metadata hash‑checks, AI‑detection tools, watermarking at origin, chain‑of‑custody logs for digital exhibits.
  • Training and capacity building: Law‑enforcement agencies, forensic labs and judges must be trained in synthetic‑media detection, significance of chain of custody, expert evidence evaluation.
  • Certification of experts: Courts should compile a roster of accredited deepfake forensic experts and set guidelines for fair cross‑examination of such experts.
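The chain-of-custody logs proposed above can be made tamper-evident by linking each entry to the hash of the previous one, so that any later alteration of an entry breaks the chain. The sketch below is a minimal illustration of that idea only; no such format is prescribed by Indian law, and the field names and functions are hypothetical.

```python
import hashlib
import json

def add_custody_entry(log: list, handler: str, action: str, file_digest: str) -> dict:
    """Append a tamper-evident entry. Each entry records who handled
    the exhibit, what was done, the exhibit's SHA-256 at that step,
    and the hash of the previous entry (a simple hash chain)."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {
        "handler": handler,          # e.g. investigating officer, lab analyst
        "action": action,            # e.g. "seized", "copied", "analysed"
        "file_digest": file_digest,  # SHA-256 of the exhibit at this step
        "prev_hash": prev_hash,
    }
    # Canonical JSON (sorted keys) so the hash is reproducible.
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def log_is_consistent(log: list) -> bool:
    """Verify every entry still hashes correctly and links to its
    predecessor; editing any past entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

The design choice here is deliberate: because each entry's hash covers the previous entry's hash, retroactively rewriting one step of the custody record invalidates every step after it, which is the property a court would need the log to have.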

3.  Judicial Procedures and Best Practices

  • Pretrial screening of digital evidence: Introduce mechanisms for preliminary verification of submitted video/audio evidence, including disclosure of creation‑history, modification logs, device/format metadata.
  • Admissibility checklists: Judges should ask for: (a) original uncompressed media, (b) device logs, (c) certificate of authenticity, (d) expert comment on manipulation risk.
  • Burden shifting where appropriate: In cases of high‑risk media (e.g., one‑off video of alleged crime), consider shifting burden to proponents to show authenticity, or require corroboration via independent evidence.

4.  Awareness, Public Policy and Access to Justice

  •  Public awareness campaigns: Educate citizens about the possibility of synthetic media, particularly in electoral, reputational and intimate contexts.
  • Access to forensic support: Ensure that weaker litigants (poor, rural) have access to forensic verification tools so that the deepfake threat does not become a class‑based digital‑justice divide.
  • International cooperation: Given that hosting and origin often cross borders, India must engage with global norms on synthetic‑media detection, cross‑border data access and deepfake regulation.

5.  Safeguarding Judicial Credibility

  •  The judiciary must publicly acknowledge the deepfake risk and signal consistent standards for digital evidence. This enhances trust and deters misuse. The Supreme Court’s acknowledgment (Justice Kohli) marks a positive start.
  • Judges should issue reasoned findings when they admit or exclude digital evidence, citing forensic verification—or lack thereof—to build jurisprudence on deepfake‑related evidence.

Conclusion

The proliferation of deepfakes marks a critical juncture for the Indian judiciary’s digital‑evidence jurisprudence. For decades the shift from paper to digital evidence was seen as inevitable and as a boon; now that shift faces a new risk: the same digital records may be fabricated or manipulated with little trace. The current Indian legal framework—though commendably modernised—was not designed to confront synthetic media. Without targeted legislative, procedural and institutional reforms, the risk is not just that a few cases will be compromised, but that the credibility of digital evidence as a whole will erode.

Yet this challenge also presents an opportunity. By adopting robust forensic protocols, enhancing judicial literacy, and updating the law to address AI‑driven manipulation, India can emerge as a model for managing the deepfake threat. The integrity of justice requires that digital evidence—even in an age of AI‑generated deception—remains credible, reliable and worthy of trust.

If courts, lawmakers and investigators act now, the Indian judiciary can transform this threat into a catalyst for technological and procedural modernisation—and thereby safeguard the promise of justice in the digital era.

Bibliography

  1. “Emergence of Deepfake Technology Cause of Deep Concern: Supreme Court Judge,” NDTV (Dec. 9, 2023), https://www.ndtv.com/india-news/emergence-of-deepfake-technology-cause-of-deep-concern-supreme-court-judge-4649510/amp/1.
  2. “SC Judge voices concern over online harassment, says deepfake tech raises privacy invasion alarms,” Times of India (Dec. 9, 2023), https://timesofindia.indiatimes.com/india/sc-judge-voices-concern-over-online-harassment-says-deepfake-tech-raises-privacy-invasion-alarms/articleshow/105864778.cms.
  3. “Deepfakes of Bollywood stars spark worries of AI meddling in India election,” Reuters (Apr. 22, 2024), https://www.reuters.com/world/india/deepfakes-bollywood-stars-spark-worries-ai-meddling-india-election-2024-04-22/.
  4. “Legal Challenges of Deepfake Technology and AI-Generated Content in India,” Jus Corpus (2024), https://www.juscorpus.com/legal-challenges-of-deepfake-technology-and-ai-generated-content-in-india/.
  5. “The Impact of Deepfake Technology: Legal Risks and Regulatory Solutions,” Mondaq (2024), https://www.mondaq.com/india/new-technology/1550822/the-impact-of-deepfake-technology-legal-risks-and-regulatory-solutions.
  6. “AI-Generated Evidence in Indian Courts: Admissibility and Legal Challenges,” Law Jurist (Jul. 2, 2025), https://lawjurist.com/index.php/2025/07/02/ai-generated-evidence-in-indiancourts-admissibility-and-legal-challenges/.
  7. “7 Alarming Ways Deepfake Evidence Impacts Court Cases & How to Fight Back,” The Kanoon Advisors (2024), https://thekanoonadvisors.com/7-alarming-ways-deepfake-evidence-impacts-court-cases-how-to-fight-back/.
  8. “Deepfakes, Dignity, and the Delhi High Court: India’s Digital Rights Turning Point,” The IP Press (Oct. 17, 2025), https://www.theippress.com/2025/10/17/deepfakes-dignity-and-the-delhi-high-court-indias-digital-rights-turning-point/.
  9. “Deepfakes, legal perspectives on crime, consent, and misinformation in India,” Prime Legal Law Firm Blogs (2024), https://blog.primelegal.in/deepfakes-legal-perspectives-on-crime-consent-and-misinformation-in-india/.
  10. “Threat of fake AI generated evidence,” Reddit (Sept. 22, 2024), https://www.reddit.com/r/LegalAdviceIndia/comments/1fmsoip.
  11. Arvind Gupta et al., “Hindi audio-video-Deepfake (HAV-DF): A Hindi language-based Audio-video Deepfake Dataset,” arXiv (Nov. 23, 2024), https://arxiv.org/abs/2411.15457.
