
AI-GENERATED EVIDENCE IN INDIAN COURTS: EMERGING CHALLENGES AND THE REGULATORY VACUUM

Authored By: CHARUPRIYA

NIMS School Of Law, Jaipur, Rajasthan

  • INTRODUCTION

Criminal investigation in India is changing rapidly as artificial intelligence enters the field. Alongside facial recognition, predictive tools now forecast crime, cyber investigation is sharpened by intelligent software, and voice and image analysis digs deeper into digital material. Yet the rules of evidence still rest on older statutes, the Indian Evidence Act, 1872 and the Information Technology Act, 2000, drafted long before AI could fabricate faces or alter reality. Deepfake videos, algorithmic identification matches, and reconstructed images introduce kinds of material that do not fit neatly into established notions of "digital evidence". Judges must now decide whether to admit these outputs, how to verify them, and what weight to give them. This gap calls for prompt action from lawmakers and courts so that trials remain fair.

  • THE SPREAD OF AI TOOLS IN CRIMINAL INVESTIGATIONS ACROSS INDIA

As artificial intelligence takes on a larger role in detecting and investigating crime, it is steadily changing how investigations are conducted in India.

A. DEEPFAKES IN ONLINE FRAUD AND ABUSE TARGETING WOMEN

Deepfake technology is now widely used in online attacks, particularly for impersonation and the non-consensual sharing of intimate images. Law enforcement agencies, in turn, use artificial intelligence to detect these fakes and verify whether videos or photographs are genuine, so AI ends up being deployed against its own creations.

B. FACIAL RECOGNITION AND PREDICTIVE POLICING

The National Crime Records Bureau operates an Automated Facial Recognition System, a sign of how widely AI is now used for identification and surveillance. The technology helps track suspects rather than relying on traditional methods alone, but it is not always accurate. Related tools also predict where crimes are likely to occur by spotting patterns over time. Although they can speed up investigations, they raise concerns about error rates, who ends up under surveillance, and how easily they could be abused.

C. VOICE CLONING, IMAGE RECONSTRUCTION, AND AI-ENHANCED BIOMETRICS

AI-powered voice cloning has made scams and impersonation harder to detect. At the same time, forensic laboratories now rely on machine-learning software to sharpen blurry photographs and piece together damaged video. Biometric systems enhanced by machine learning also help identify suspects in terrorism, financial crime, and identity theft cases. Even so, the law has not caught up with what these AI-altered results actually mean in court.

  • WHAT COUNTS AS AI-GENERATED EVIDENCE?

AI-generated evidence means information, outputs, or analysis created, enhanced, or shaped by artificial intelligence tools, for instance where a machine detects patterns, reconstructs material, or drafts summaries rather than a person doing so from scratch. Examples include:

  • Deepfake detection reports

  • Predictive policing outputs

  • Facial recognition matches produced by algorithmic comparison

  • Images or video reconstructed or enhanced with AI

  • Machine-generated summaries of digital forensic examinations

The Indian Evidence Act, 1872 (now the Bharatiya Sakshya Adhiniyam, 2023) neither names nor expressly provides for this category of material, which leaves a significant gap in the law.

  • THE GOVERNING LEGAL FRAMEWORK: MISSING PIECES

A. THE 1872 EVIDENCE ACT

Sections 65A–65B of the Indian Evidence Act (now Sections 62–63 of the BSA, 2023) govern electronic records and impose strict procedural requirements, most notably the certificate under Section 65B(4). They do not, however, address material generated or altered by artificial intelligence. Deepfake videos, algorithmic identification matches, and predictive outputs slip through verification rules designed for conventional electronic records.

B. THE INFORMATION TECHNOLOGY ACT, 2000

The IT Act recognises electronic records but says nothing about how AI-generated material is produced or admitted in court. It concentrates on authenticating electronic identity and securing networks rather than on systems that create or analyse data, so it handles basic questions of digital trust while missing the larger questions raised by artificial intelligence.

C. NO STANDARD FOR ASSESSING ALGORITHMIC RELIABILITY

India has no framework for testing whether technical evidence is reliable, unlike the United States, where courts apply the Daubert standard. Because no law sets clear expectations for how transparent algorithms must be, judges cannot properly assess their fairness, and problems such as skewed training data or high error rates go unexamined. When the police rely on AI tools, judicial scrutiny of those tools is correspondingly weak.

  • KEY CHALLENGES IN USING AI-GENERATED EVIDENCE

A. AUTHENTICITY AND THE RISK OF FABRICATION

Deepfakes and other AI-generated synthetic video can look almost indistinguishable from genuine footage and can make innocent people appear guilty. Conventional detection methods often fail as manipulation techniques become more sophisticated, and without clear rules for authenticating such material the danger grows.

B. ALGORITHMIC BIAS AND OPACITY

AI tools tend to reproduce unfair patterns in the data they are trained on, and the burden falls hardest on vulnerable groups. Because proprietary algorithms conceal how they work, the people affected cannot see what drives their decisions. That undermines fairness: a person judged by a machine has no real way to contest the outcome.

C. CHAIN-OF-CUSTODY PROBLEMS

Files created or altered by AI disrupt the standard chain of custody. It is difficult to trace exactly what a system did to create or change a piece of evidence, and where those steps are not recorded, doubts arise that weaken its weight or lead to its rejection.
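Purely by way of illustration (nothing in current Indian law requires this, and the tool name, file names, and parameters below are hypothetical), a forensic tool could emit a tamper-evident processing log alongside its output. The Python sketch below records each AI processing step together with SHA-256 hashes of its input and output files, so a later examiner can check which tool touched the evidence and with what settings.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, used to fingerprint evidence."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_ai_step(log_path, tool, version, input_file, output_file, parameters):
    """Append one AI processing step to a JSON-lines audit log.

    Each entry records the tool, its version, the parameters used, and
    hashes of the input and output files, so later alteration of either
    the evidence or the log entry becomes detectable.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "parameters": parameters,
        "input_sha256": sha256_of(input_file),
        "output_sha256": sha256_of(output_file),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

# Hypothetical usage: an examiner enhances CCTV footage and logs the step.
# log_ai_step("case_1234_audit.jsonl", "FrameEnhancer", "2.1",
#             "cctv_raw.mp4", "cctv_enhanced.mp4",
#             {"model": "super-resolution", "scale": 2})
```

A log of this kind does not make the underlying model explainable, but it at least lets both sides reconstruct what was done to the evidence and confirm that the files produced in court match the files the tool actually generated.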

D. SHORTAGE OF EXPERT WITNESSES

India does not have enough trained AI forensics specialists who can clearly explain how an algorithm works, how accurate it is, and where it falls short. Because many judges are not well versed in AI tools, following intricate technical arguments is a real challenge.

E. PRIVACY AND SURVEILLANCE CONCERNS

Facial recognition systems collect large volumes of personal data with no clear law governing their use. That raises serious concerns about privacy rights and could enable arbitrary police action.

F. DIFFICULTY OF CROSS-EXAMINATION

Because a machine cannot be questioned under cross-examination and its reasoning remains hidden, an accused person cannot test the reliability of computer-generated evidence the way they can test a human witness's testimony.

G. REPRODUCIBILITY AND VERIFIABILITY

A small change in the input can change what an AI system returns. That undermines consistency, and scientific evidence is trusted precisely because its results can be repeated and verified.
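As a minimal, invented illustration of that point (the "matcher" below is a toy, not any real forensic tool), the sketch shows a score that depends on unrecorded randomness: run twice on identical input without a fixed seed it can give different answers, while recording and fixing the seed restores the repeatability that scientific evidence normally demands.

```python
import random

def toy_match_score(sample_a, sample_b, seed=None):
    """Toy stand-in for an AI similarity score.

    Real systems are far more complex, but like many of them the result
    here depends on random initialisation unless a seed is fixed.
    """
    rng = random.Random(seed)
    noise = rng.uniform(-0.05, 0.05)  # unrecorded randomness
    overlap = len(set(sample_a) & set(sample_b)) / max(len(set(sample_a) | set(sample_b)), 1)
    return round(overlap + noise, 3)

a, b = "voiceprint-A", "voiceprint-B"

# Without a seed, two runs on identical input can disagree.
print(toy_match_score(a, b), toy_match_score(a, b))

# With the seed recorded, the result can be reproduced and checked later.
print(toy_match_score(a, b, seed=42), toy_match_score(a, b, seed=42))
```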

  • JUDICIAL RESPONSE

A. A CAUTIOUS APPROACH TO DIGITAL EVIDENCE

In Anvar P.V. v. P.K. Basheer (2014), the Supreme Court held that compliance with Section 65B(4) is mandatory, and in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020) it reaffirmed those strict requirements for electronic evidence. Those decisions, however, concern conventional electronic records, not material generated or altered by artificial intelligence.

B. NO PRECEDENT YET ON AI-GENERATED EVIDENCE

No Indian court has yet ruled directly on whether deepfakes, AI-generated clips, or algorithmic predictions can be trusted as evidence. Judges still lean on guidelines framed for conventional digital evidence, which simply do not fit how this technology works.

Courts have, however, begun to treat digitally altered clips and online material with scepticism, particularly at the bail stage, questioning how reliable such material really is. While this is not yet settled law, that caution reflects a growing awareness of how dangerous fabricated media can be.

  • COMPARATIVE PERSPECTIVE

A. UNITED STATES

U.S. courts apply the Daubert standard, laid down in Daubert v. Merrell Dow Pharmaceuticals (1993), under which judges ask whether a technique has been tested, subjected to peer review, has a known error rate, and is generally accepted in the relevant scientific community. Some states are also beginning to discuss specific rules for deepfakes.

B. EUROPEAN UNION

The EU Artificial Intelligence Act classifies law-enforcement AI as high-risk, so such systems must meet strict requirements on transparency, data governance, human oversight, and accuracy.

C. WHAT INDIA CAN LEARN

India should introduce legal standards for assessing the reliability of AI tools used by the police, together with clear oversight. Comparative experience shows that strong safeguards help avoid unfair outcomes and keep investigative methods grounded in verifiable evidence.

  • THE WAY FORWARD: PROPOSALS FOR LEGAL REFORM

A. CHANGES TO THE EVIDENCE ACT

The law should define "AI-generated evidence" and lay down dedicated conditions for its admissibility, focusing on whether the material is genuine or altered, how the underlying technology works, whether it can be trusted, and whether it holds up under scrutiny.

B. MANDATORY ALGORITHMIC AUDITS

Independent audits of AI systems should examine their accuracy, the data used to train them, any unfair biases, and their vulnerability to manipulation. The findings of these audits should be publicly accessible and clearly explained.
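To make concrete what such an audit might report (the records and group labels below are invented for illustration), the sketch computes overall accuracy and per-group false positive rates from a labelled test set, the kind of disaggregated figures that would let a court see whether an identification tool errs more often against particular groups.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, true_match, predicted_match)
records = [
    ("group_a", True,  True), ("group_a", False, False), ("group_a", False, True),
    ("group_b", True,  True), ("group_b", False, False), ("group_b", False, False),
]

def audit(records):
    """Return overall accuracy and false positive rate per demographic group."""
    correct = sum(1 for _, truth, pred in records if truth == pred)
    accuracy = correct / len(records)

    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, truth, pred in records:
        if not truth:
            neg[group] += 1
            if pred:
                fp[group] += 1
    fpr = {g: fp[g] / neg[g] for g in neg if neg[g]}
    return accuracy, fpr

accuracy, fpr_by_group = audit(records)
print(f"accuracy: {accuracy:.2f}")            # 0.83 on this toy data
print(f"false positive rate by group: {fpr_by_group}")
```

Publishing figures of this kind is what allows a defendant and the court to test a vendor's claim of high accuracy rather than taking it on faith.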

C. NATIONAL DEEPFAKE FORENSICS STANDARDS

The Ministry of Home Affairs should lay down uniform protocols for detecting, analysing, and reporting deepfakes so that all forensic laboratories work to the same methods.

D. CERTIFICATION OF AI FORENSICS EXPERTS

Courts should rely on certified AI forensics experts who understand how algorithms work, can identify their biases, and can confirm results through testing.

E. AN INDEPENDENT OVERSIGHT BODY

A national oversight body could monitor how the police use AI, ensuring fairness and protecting civil liberties through clear rules of accountability.

F. ROBUST DATA PROTECTION

A robust data protection framework would regulate AI-driven surveillance, preserve fairness, and prevent the misuse of personal data.

  • CONCLUSION

AI is transforming how India handles criminal cases, but it brings serious risks with it. The absence of solid rules governing AI-generated evidence puts fair trials and fundamental rights at stake: without settled standards and expert scrutiny, courts may end up relying on flawed or biased outputs. New legislation, informed by what has worked elsewhere, is needed quickly. Staying ahead means reforming the framework now, well before the technology becomes entrenched in courtrooms.
