
ARTIFICIAL INTELLIGENCE-GENERATED EVIDENCE AND ITS ADMISSIBILITY UNDER THE INDIAN EVIDENCE ACT, 1872

Authored By: Taranjeet Kaur

Himachal Pradesh National Law University, Shimla

Abstract

The accelerated adoption of Artificial Intelligence (AI) in the real world has begun to affect legal systems in unprecedented ways. From facial recognition software and predictive policing tools to AI-generated audio, video, and documents, courts increasingly face evidence produced or processed by machines rather than by humans. This development raises serious concerns about admissibility, authenticity, reliability, and accountability within existing legal frameworks. In India, the admissibility of evidence is governed by the Indian Evidence Act, 1872 (IEA), which predates the development of AI technologies by a wide margin. This article critically analyses whether AI-generated evidence is covered by the current statutory scheme of the Indian Evidence Act, examines the applicable judicial precedents, and considers the legal issues presented by algorithmic decision-making. It argues that although the existing framework can accommodate AI-generated evidence to a limited extent, legislative clarification and judicial guidelines are needed to ensure justice, transparency, and due process.

Introduction

Technological progress has long posed challenges to legal systems, forcing them to adapt established principles to new realities. Artificial Intelligence represents one of the most radical technological developments of recent decades. Modern AI systems can produce text, images, video, voice recordings, and even analytical conclusions with minimal human input. Such outputs are increasingly relied upon in investigations, litigation, and dispute resolution.

Indian courts have already begun to deal with electronic evidence under the Information Technology Act, 2000 and the Indian Evidence Act, 1872. However, AI-generated evidence differs from standard electronic records because it is not merely stored or transmitted by a computer but is actively produced by algorithms. This distinction raises issues of authorship, accuracy, bias, and accountability.

The present article examines the admissibility of AI-generated evidence under the Indian Evidence Act, 1872, the evidentiary tests that should apply to such evidence, and the gaps in the existing legislation.

Understanding AI-Generated Evidence

AI-generated evidence refers to content produced, analysed, or modified by artificial intelligence systems rather than directly by humans. Common examples include:

  • Images, videos, and audio produced by AI.
  • Automated facial recognition results.
  • Predictive analytics used in criminal investigations.
  • AI-generated reports and transcripts.
  • Algorithmic risk assessment tools.

Unlike traditional electronic evidence, AI outputs are shaped by training data, machine learning algorithms, and probabilistic reasoning. Consequently, their trustworthiness depends not only on the hardware and software involved but also on data integrity, algorithmic transparency, and bias.

The Statutory Framework under the Indian Evidence Act, 1872

Electronic Evidence under Sections 65A and 65B

The Indian Evidence Act makes no mention of artificial intelligence. The admissibility of electronic records, however, is governed by Sections 65A and 65B. Section 65B provides that an electronic record may be admitted in evidence only if accompanied by a certificate describing how the record was produced and attesting to its authenticity.

In Anvar P.V. v. P.K. Basheer (2014), the Supreme Court clarified that electronic evidence must comply with Section 65B. This position was affirmed in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020).

AI-generated evidence would prima facie fall within Section 65B, since it is electronic in nature. Nevertheless, practical difficulties arise in identifying the person competent to issue the certificate, particularly when an AI system produces the output autonomously.

Difficulties in Admitting AI-Generated Evidence

  • Authorship and Responsibility

Conventional evidence presupposes a human author or producer. For AI-generated content, it is unclear whether responsibility lies with the developer, the operator, the user, or the AI system itself. Under the Indian Evidence Act, non-human entities are not legal persons capable of being the authors of a document.

  • Reliability and Accuracy

AI systems may generate inaccurate or biased results because of poor training data or algorithmic limitations. Judges must assess the reliability and probative value of such evidence under Sections 3 and 45 of the Evidence Act.

  • Expert Evidence under Section 45

Section 45 permits courts to rely on expert opinions in matters involving science or technology. AI-generated evidence can, and frequently does, require expert testimony explaining how the algorithm operates, how the data was analysed, and whether the result can be relied upon.

Nevertheless, overreliance on expert interpretation can undermine judicial independence and transparency.

  • Deepfakes and Fabricated Evidence

AI has made it possible to produce highly realistic fake audio and video recordings. Such material poses a grave danger to the integrity of the justice system. In the absence of robust safeguards, courts may unwittingly admit fabricated evidence.

Judicial Approach in India

Indian courts have been cautious in accepting technological evidence. In State (NCT of Delhi) v. Navjot Sandhu (2005), the Supreme Court permitted the admission of electronic evidence without strict adherence to Section 65B, but this position was later overruled.

In recent years, courts have laid stress on authenticity and procedural safeguards. Although no authoritative ruling on AI-generated evidence has yet been delivered, the courts' reasoning suggests that they would demand strict compliance with evidentiary standards, supplemented by expert validation.

Comparative Perspective

Other jurisdictions face similar challenges. The European Union has proposed the Artificial Intelligence Act, which emphasises transparency and accountability. In the United States, courts evaluate AI-derived evidence under standards such as the Daubert test for scientific reliability.

This trend suggests that India needs a progressive approach that balances innovation with legal certainty.

Need for Legal Reform

In the absence of clear statutory guidance, litigants and courts face uncertainty about AI-generated evidence. Key reforms may include:

  • Statutory recognition of AI-generated evidence.
  • Clearer certification requirements under Section 65B for AI-generated records.
  • Mandatory disclosure of algorithmic processes in sensitive cases.
  • Training of judicial personnel in technological matters.
  • Safeguards against deepfakes and doctored evidence.

Conclusion

Artificial intelligence is transforming the nature of evidence and the administration of justice. Although the Indian Evidence Act, 1872 provides a basic framework for electronic evidence, it cannot comprehensively address the issues raised by AI-generated material. Courts can provisionally evaluate such evidence with the help of Sections 65B and 45, but this remains an imperfect solution.

India should proactively reform its evidentiary legislation and develop judicial guidelines so that AI-generated evidence is handled in a manner that is fair, accurate, and consistent with the rule of law. The future of justice in the digital age will be determined by a balanced approach that embraces technological advancement while protecting due process.

Reference(S):

  • Indian Evidence Act, 1872.
  • Information Technology Act, 2000.
  • Anvar P.V. v. P.K. Basheer (2014) 10 SCC 473.
  • Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020) 7 SCC 1.
  • State (NCT of Delhi) v. Navjot Sandhu (2005) 11 SCC 600.
  • Surden, H., 'Machine Learning and Law' (2014) 89 Washington Law Review 87.
  • European Commission, Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act), 2021.
