Authored By: Leya Mahesh
St. Joseph’s College of Law
ABSTRACT
The criminal justice system in India faces unprecedented challenges as a result of the rapid development of deepfake technology. Deepfakes, powered by artificial intelligence, can create highly lifelike audio and video content capable of deceiving the public, courts, and investigators. This study examines the forensic and evidentiary ramifications of deepfakes in the Indian criminal justice system, specifically in light of the Bharatiya Sakshya Adhiniyam, 2023, which regulates the admission of electronic evidence. It assesses recent Indian cases, examines how current forensic techniques and judicial processes fail to uncover AI-generated manipulation, and identifies gaps in the existing legal and technological frameworks. The study concludes by highlighting the pressing need for forensic innovation, specialised legal reforms, and judicial training in order to protect the integrity of evidence and guarantee fair trial standards in the digital age.
Keywords: Deepfake, Artificial Intelligence, Challenges, Evidentiary Issues, Digital Evidence, Cybercrime, BSA
1. INTRODUCTION
Recent deepfake-related cybercrimes in India, such as the 12 cases reported in Karnataka across two years, highlight the mounting dangers posed by hyper-realistic synthetic media.1
Deepfakes leverage artificial intelligence to fabricate convincing visual and auditory content, threatening the authenticity of evidence and the efficacy of the justice system. Their spread affects everything from gendered online violence to electoral manipulation. This paper examines the evidentiary, legal, and policy issues raised by deepfakes in the context of India’s rapid digital development, with an emphasis on immediate reforms.
Combining the terms “deep learning” and “fake”, the term “deepfake” refers to the digital manipulation of genuine media, in which voices and facial expressions are frequently altered without consent. Deepfakes are driven by cutting-edge machine learning, deep learning, and artificial intelligence, and there are legitimate concerns that these tools may be misused.2 They are produced by digitally processing various media, such as audio, video, and image data, using artificial intelligence.3 By its nature, digitally manipulated media can erode public confidence in institutions, harm people’s reputations, and misrepresent facts. Furthermore, the intersection of deepfake technology with existing legal frameworks, such as those pertaining to intellectual property, privacy, and defamation, raises new questions about jurisdiction, responsibility, and the degree of protection afforded to individuals and companies.4 India currently has no specific rules, despite the government’s acknowledgement that they are being developed; the risks of the present necessitate a more resilient and flexible legal system.5
India’s legal framework for handling deepfakes is still developing; although existing laws on defamation, information technology, and data protection may be relevant, they were not drafted expressly to handle the difficulties presented by this technology. The absence of a distinct legal framework for deepfakes and crimes utilising artificial intelligence creates uncertainty and potential protection gaps, necessitating a more comprehensive and advanced approach.6 Although the government has acknowledged the need for regulation, the legal framework is still under construction.
Narrowly defined, deepfakes, derived from the words “deep learning” and “fake”, are produced by methods that superimpose a target person’s face onto a source person’s video, creating a video in which the target appears to say or do what the source actually said or did. This constitutes one category of deepfake, the face swap. More broadly, deepfakes are artificial intelligence-synthesised content that can also fall into two other categories: lip-sync and puppet-master. Lip-sync deepfakes are videos whose lip movements have been altered to match an audio recording. In puppet-master deepfakes, a performer sits in front of the camera while the target person’s animated face, eyes, and head move in sync with the performer’s.7 While scholarly and policy discussions have focused on cyber regulation and digital ethics, little attention has been given to the evidentiary challenges deepfakes pose in criminal adjudication, which this paper aims to address.
RESEARCH METHODOLOGY
The present study uses both theoretical and analytical methodologies to conduct a thorough evaluation of the evidentiary issues posed by deepfakes in India and around the world. It draws on data from scholarly journal papers, articles, and reports gathered from a variety of credible sources. Only papers and reports subjected to peer review or published by credible organisations and government agencies have been selected for analysis.
2. UNDERSTANDING DEEPFAKES AND THEIR RISKS
A type of synthetic media known as “deepfakes” is produced by sophisticated artificial intelligence (AI) and deep learning techniques like Generative Adversarial Networks (GANs). Through training on extensive datasets, these systems can effectively alter or substitute visual or aural input, producing audio or video that seems real but is actually artificial.
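To make this mechanism concrete, the sketch below illustrates the adversarial training loop at the heart of a GAN: a generator learns to produce samples that a discriminator can no longer tell apart from real data. It is a toy illustration only; the choice of PyTorch, the layer sizes, and the random stand-in data are assumptions of ours, and production deepfake pipelines are vastly larger and operate on real images or video frames.

```python
# Minimal, illustrative GAN training loop (toy dimensions, random data).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 face crop

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
# Discriminator: scores how "real" a sample looks (raw logits).
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(32, data_dim)  # stand-in for real face images

for step in range(100):
    # 1) Train D to separate real samples from G's fakes.
    fake = G(torch.randn(32, latent_dim)).detach()
    loss_d = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train G to fool D: its fakes should score as "real".
    fake = G(torch.randn(32, latent_dim))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The point most relevant to the evidentiary discussion is structural: the generator is optimised precisely to defeat a detector, which is one reason detection tools tend to lag behind generation tools.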
Another, less sophisticated but related manipulation method is the “shallow fake”, which involves deceptively edited media produced with basic tools rather than AI. Both types make it seriously difficult to distinguish reality from fiction: as editing tools become more accessible, even laypersons can create convincing fake evidence.8
The risks posed by deepfakes include the spread of false information: deepfake videos of celebrities or politicians can sow confusion about public issues and be used to sway public opinion. Another risk is the harassment and defamation of individuals, as when deepfake technology is turned to unethical ends such as the creation of revenge pornography.
3. WHY DEEPFAKES MATTER IN INDIA
3.1. Democratic Harm
Deepfakes threaten to undermine democratic integrity, particularly during elections. Fake videos deploying synthetic speech or actions of public figures can mislead voters, fuel communal tensions, and distort public discourse. Indian authorities have already reported the circulation of such content ahead of state elections in 2025, prompting rapid intervention by cybercrime coordination agencies.
3.2. Gendered Abuse
A substantial proportion of deepfake complaints submitted on India’s cybercrime reporting platforms concern non-consensual pornographic content targeting women. Cyberbullying, sextortion, and gender-based violence are made possible by this weaponisation of technology, which is further exacerbated by the unauthorised use of personal biometric information.
3.3. Platform Accountability
To preserve statutory immunity, digital intermediaries and social media platforms in India are subject to stringent due diligence obligations. This entails the timely elimination of damaging content as well as the proactive use of watermarking and AI-based detection techniques.
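To indicate what watermarking can involve at a technical level, the following is a deliberately simple sketch of embedding an invisible payload in an image's least-significant bits using NumPy. This is our illustration only, not a scheme mandated by Indian law or actually deployed by platforms, which favour robust or cryptographic provenance standards such as content credentials.

```python
# Toy illustration of invisible watermarking via least-significant-bit
# (LSB) embedding. Concept demo only; not a production provenance scheme.
import numpy as np

def embed_bits(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a bit string into the LSBs of the first len(bits) pixels."""
    flat = image.flatten()  # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_bits(image: np.ndarray, n: int) -> np.ndarray:
    """Read n watermark bits back out of the LSBs."""
    return image.flatten()[:n] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=128, dtype=np.uint8)        # watermark payload

tagged = embed_bits(img, mark)
assert np.array_equal(extract_bits(tagged, 128), mark)  # survives an exact copy
# Note: LSB marks are fragile; recompression or editing destroys them,
# which is why platforms favour robust or cryptographic provenance schemes.
```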
3.4. Security, Economic, and Cultural Risks
Deepfakes can endanger public order and national security when they are used improperly for fraud, impersonation, or propaganda. Voice-based deepfakes have already been linked to financial schemes that target both people and organisations. Manipulated content also runs the risk of escalating intercommunal conflicts and eroding social cohesiveness in a multicultural, multilingual nation like India.
4. RECENT INDIAN CYBERCRIME CASES INVOLVING DEEPFAKES
4.1. Ranveer Singh Deepfake: Ranveer Singh, an Indian actor, filed a police complaint over a viral deepfake video that purportedly showed him endorsing a political party. The underlying footage was authentic, an interview he gave to the news agency ANI during a visit to Varanasi, but the audio was generated by an AI-powered voice-cloning system. In the deepfake, Singh appeared to attack Prime Minister Narendra Modi’s handling of unemployment and inflation, and the video closed with messages urging viewers to vote for the Congress party. Singh’s legal team confirmed that an investigation was initiated after a First Information Report (FIR) was filed.9 After the deepfake went viral, he cautioned his Instagram followers: “Deepfake se bacho doston (Friends, beware of deepfakes).”
4.2. Deepfake financial fraud involving $25.6 million via an AI-generated video call: Earlier in 2024, a finance employee at Arup, the 78-year-old London-headquartered architecture and design firm, transferred $25.6 million to fraudsters following a deepfake video conference with what appeared to be the company’s CFO and other staff members. The employee had initially become suspicious when the “CFO” emailed him stating that a covert transaction was required, but the video call persuaded him to set aside his misgivings and remit the money. Only after checking with the company’s headquarters did he discover it was a fraud.10
4.3. The first deepfake fraud case in Kerala: It was recorded in July 2023, when Radhakrishnan, a 73-year-old man, lost Rs 40,000 after falling for the scam. The victim received a WhatsApp video call that appeared to come from Venu Kumar, a former colleague. The caller, impersonating Kumar using deepfake technology, demanded Rs 40,000 for an urgent matter, and Radhakrishnan, believing he was speaking to his colleague, transferred the money without hesitation. On realising he had been scammed, he lodged a complaint with the police, who traced the money to an account in Maharashtra. The Kerala Police also cautioned the public to be wary of unexpected calls and messages and warned about the possibility of deepfake fraud.11
5. LEGAL AND CONSTITUTIONAL FRAMEWORK
5.1. Constitutional Provisions
The Indian Constitution offers foundational rights that frame both the promise and the challenges of regulating deepfakes.
The Indian Constitution’s Article 19(1)(a) protects the right to free speech and expression, but Article 19(2) allows for reasonable limitations in the service of national security, morality, and public order.12 In K.S. Puttaswamy v. Union of India, the Supreme Court upheld the fundamental right to informational privacy under Article 21 and placed an affirmative duty on the State to safeguard citizens against the nefarious use of digital data. India’s constitutional commitment to dignity and non-discrimination (Articles 14 and 21) also requires that regulatory frameworks address the disproportionate harm deepfakes inflict on marginalised groups, particularly women, in cases of non-consensual imagery.
5.2. Statutory Provisions
Information Technology (IT) Act, 2000.13
- Sections 66C and 66D: Penalise identity theft and cheating by personation, including through fake digital content.
- Section 66E: Criminalises non-consensual transmission of private images.
- Sections 67, 67A, 67B: Penalise publishing or transmission of obscene material, sexually explicit content, and child pornography.
Bharatiya Nyaya Sanhita (BNS), 2023.14
- Section 77: Addresses the capture and transmission of private images of women.
- Section 351: Penalises criminal intimidation, including by digital means.
- Section 356: Covers defamation, which can include deepfake-related harm.
Digital Personal Data Protection (DPDP) Act, 2023.15
This act controls how digital personal data is processed, creates rights and responsibilities for “data fiduciaries” and “data principals,” and enhances remedies for data misuse linked to the production or dissemination of deepfakes.
6. JUDICIAL PRECEDENTS
Deepfake-related cases have started to appear before the Indian judiciary; however, the body of jurisprudence remains small because the technology is still in its infancy. A few significant cases have begun to shape the legal understanding of deepfake concerns and their overlap with existing legal standards.
6.1. Shreya Singhal v. Union of India (2015)16
Although not concerned with deepfakes specifically, the Supreme Court’s ruling in Shreya Singhal v. Union of India established significant guidelines for online communication and the balance between free speech and justifiable limitations. The Court declared Section 66A of the IT Act unconstitutional for imposing vague limits on speech, emphasising that restrictions on online communication must adhere to the same standards as those governing offline speech and that digital expression cannot be curtailed unless the restriction relates directly to one of the grounds specified in Article 19(2). This precedent is central to understanding how courts may interpret speech prohibitions linked to deepfakes.
6.2. Justice K.S. Puttaswamy (Retd.) v. Union of India (2017)17
This landmark ruling recognised privacy as a fundamental right under the Indian Constitution. The verdict has significant implications for deepfake cases, particularly those involving the unauthorised use of private images and videos: because the Court acknowledged informational privacy as an essential component of the right to life and liberty, it obliges the State to act to prevent the misuse of digital content.
7. EVIDENTIARY CHALLENGES OF DEEPFAKES IN INDIA
7.1. Verification of Audio-Video Evidence by Courts
The Indian Evidence Act, 1872, has been superseded by the Bharatiya Sakshya Adhiniyam (BSA), 2023, which governs the authentication and admission of electronic records in India. Under Section 63 of the BSA, electronic evidence must be accompanied by a valid certificate attesting to its authenticity and provenance before the court will consider it.18 The Supreme Court in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal reaffirmed that such certification is mandatory for ensuring the reliability of digital evidence.
However, these provisions were primarily designed to address conventional electronic materials such as CCTV footage, call recordings, or emails and not AI-manipulated content like deepfakes. Deepfakes are capable of producing incredibly lifelike but fake audio or video clips that show people doing or saying things they never did. This makes it exceedingly difficult for courts to determine whether an audio-video record reflects genuine reality or a synthetically generated falsification. The current evidentiary framework under the BSA, though modernised, does not yet incorporate specialised mechanisms to authenticate or evaluate AI-altered media, leaving significant gaps in evidentiary assessment.
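The certification regime presupposes some technical means of fixing a record's identity at the moment of seizure. A minimal sketch of one such means, cryptographic hashing, follows; it is offered as our illustration rather than a procedure prescribed by the BSA, and the file path is hypothetical.

```python
# Minimal sketch: hash-based integrity check for a seized media file.
# Illustration only; the BSA prescribes certification, not any particular
# hashing workflow, and the path below is hypothetical.
import hashlib

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# At seizure: record the digest in the case file.
digest_at_seizure = sha256_of_file("evidence/clip001.mp4")

# At trial: recompute and compare; a mismatch signals alteration in custody.
assert sha256_of_file("evidence/clip001.mp4") == digest_at_seizure
```

Crucially, a matching digest shows only that the file is unchanged since it was first hashed; it says nothing about whether the recording was synthetic before it entered custody, which is precisely the gap deepfakes exploit.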
7.2. Adequacy of Forensic Capabilities in India
Detecting deepfakes demands advanced forensic technology capable of analysing frame-level inconsistencies, metadata, and audio-visual distortions. In India, however, digital forensic infrastructure and expertise remain inadequate to meet the complexity of AI-generated evidence. Although central and state forensic laboratories analyse digital material, there are no uniform national protocols or certified methodologies to identify deepfake manipulation.
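As a rough indication of what frame-level analysis can mean in practice, the sketch below computes a simple inter-frame difference profile for a video. The OpenCV dependency and the file name are assumptions of ours; real forensic detectors are trained classifiers far more sophisticated than this triage heuristic.

```python
# Minimal sketch of one frame-level triage heuristic: measuring
# frame-to-frame pixel change. Sudden spikes can flag splices or per-frame
# synthesis artefacts for a human examiner; on its own this is far too
# weak to authenticate evidence. File path is hypothetical.
import cv2
import numpy as np

def frame_difference_profile(video_path: str) -> list[float]:
    """Mean absolute pixel change between consecutive grayscale frames."""
    cap = cv2.VideoCapture(video_path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            diffs.append(float(np.mean(np.abs(gray - prev))))
        prev = gray
    cap.release()
    return diffs

profile = frame_difference_profile("suspect_clip.mp4")
if profile:
    print(f"max inter-frame change: {max(profile):.2f}")
```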
Deepfakes produced through advanced Generative Adversarial Networks (GANs) are often so sophisticated that even expert analysis struggles to distinguish them from authentic content. This technological gap, together with the limited judicial recognition of detection techniques, compromises the legitimacy and admissibility of digital evidence in criminal prosecutions. As a result, deepfakes can skew investigations and court proceedings, leading to evidentiary abuse and unfair trials.
7.3. Growing Misuse of Deepfakes in Indian Cybercrime
Deepfake-related offences, especially those involving online harassment, impersonation, and extortion, have increased alarmingly in India. Law enforcement agencies frequently report financial fraud using cloned voices or synthetic imagery, manipulated political content, and AI-generated intimate images.
While the Information Technology Act, 2000, along with provisions under the Bharatiya Nyaya Sanhita (BNS), 2023, criminalise certain acts like identity theft (Section 66C of the IT Act), privacy violations (Section 66E), and obscene content transmission (Sections 67, 67A), these statutes were not conceived with synthetic media in mind. Deepfake-related acts such as voice cloning, facial morphing, or AI-fabricated videos often do not fit neatly within the existing legal categories. The absence of explicit legal definitions or investigative procedures for deepfakes complicates their admissibility, authentication, and evidentiary evaluation in Indian criminal trials.
8. STRUGGLES WITH DEEPFAKE EVIDENCE IN INDIAN TRIALS
Despite the presence of stronger legislation on paper, India’s trial courts are encountering practical obstacles when confronted with deepfake evidence. The challenge is not only one of legal interpretation but also of technological limitation.
Most courts rely on forensic laboratories operated by state police departments for the examination of digital evidence.19 However, advanced deepfake detection technologies capable of keeping pace with evolving generative models are absent in most facilities, particularly outside major metropolitan areas. Judges may therefore be compelled to decide cases based on audio-visual evidence that cannot be credibly authenticated.
Compounding this, lawyers on both sides often lack the technical expertise to effectively argue the authenticity or falsity of digital records. This creates space for both false positives and false negatives: genuine videos may be dismissed as deepfakes, while synthetic media may be accepted as authentic.
Such uncertainty gives rise to the so-called “deepfake defence” where accused persons claim that legitimate evidence against them has been fabricated. Conversely, convincing deepfakes can lead to the wrongful implication of innocent individuals before the truth emerges, if at all.
In high-profile cases, the risks are amplified. A fabricated witness video or doctored confession can irreparably damage reputations and influence public opinion, media narratives, and even judicial deliberation. When perception begins to outweigh proof, the trial ceases to revolve around what truly happened and instead reflects what appears to have happened, precisely the ambiguity that deepfakes exploit.
9. RECOMMENDATIONS AND WAY FORWARD
The recent case in which actor Suniel Shetty was granted urgent interim protection by the Bombay High Court against the circulation of AI-generated deepfake content underscores the urgent need for robust legal safeguards.20 India must establish criminal penalties for any individual or organisation that creates or disseminates deepfakes with the intent or effect of deceiving or manipulating others. Offences that influence elections or judicial outcomes should be treated as aggravated, while good faith exceptions (such as satire or research) can be recognised. Furthermore, developers and providers of generative AI systems should be legally obligated to implement technical safeguards preventing the misuse of their systems to create harmful deepfakes, and should be held liable for negligence or non-compliance. Strengthening the Bharatiya Sakshya Adhiniyam, 2023, upgrading forensic capacity, and training judicial officers remain essential to preserving evidentiary integrity in the age of AI-driven deception.
CONCLUSION
Deepfakes represent more than a technical curiosity: they threaten the very foundation of criminal adjudication in India. The evidentiary tools we rely on, such as authentication certificates and forensic reports, are being stretched to breaking point by synthetic media that evade detection, blur provenance, and distort truth. While the Bharatiya Sakshya Adhiniyam, 2023 offers a framework for electronic evidence, it does not yet anticipate the full range of harms wrought by AI-altered content, nor does it equip courts with the tools needed for verification.21 True reform demands not only new laws and penalties but also investment in forensics, rigorous protocols, judicial education, and ethical accountability from those building generative systems. If India fails to adapt, the courtroom may become a theatre of illusion rather than a forum of fact.
REFERENCE(S):
1. Darshan Devaiah B.P., Karnataka reports 12 deepfake-related cybercrime cases in two years, The Hindu, March 19, 2025.
2. Biranchi Narayan P. Panda & Isha Sharma, Deepfake Technology in India and World: Foreboding and Forbidding, July 16, 2025.
3. Artificial Intelligence (AI) Policies in India: A Status Paper, August 2020.
4. Qureshi, S. M., Saeed, A., Almotiri, S. H., Ahmad, F., & Ghamdi, M. A. A. (2024). Deepfake forensics: a survey of digital forensic methods for multimodal deepfake identification on social media. PeerJ Computer Science, 10. https://doi.org/10.7717/peerj-cs.2037
5. Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., & Nahavandi, S. (2019). Deep Learning for Deepfakes Creation and Detection. arXiv (Cornell University). http://arxiv.org/pdf/1909.11573.pdf
6. Bharati, R. (2024). Navigating the Legal Landscape of Artificial Intelligence: Emerging Challenges and Regulatory Framework in India. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4898536
7. Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano & Hao Li, Protecting world leaders against deep fakes, in Computer Vision and Pattern Recognition Workshops, vol. 1, pp. 38–45 (2019).
8. Neill Jacobson, Deepfakes and Their Impact on Society, Openfox, February 26, 2024. https://www.openfox.com/deepfakes-and-their-impact-on-society/
9. Das, S. (2024). Video Of Ranveer Singh Criticising PM Modi Is A Deepfake AI Voice Clone. https://www.boomlive.in/fact-check/viral-video-bollywood-actor-ranveer-singh-congress-campaign-lok-sabha-elections-claim-social-media-24940
10. Prarthana Prakash, A deepfake ‘CFO’ tricked the British design firm behind the Sydney Opera House in $25 million scam, Fortune, May 17, 2024. https://fortune.com/europe/2024/05/17/arup-deepfake-fraud-scam-victim-hong-kong-25-million-cfo/
11. Indian Cyber Squad (2023, November 27). Case Study: Kerala’s first deepfake fraud. https://www.indiancybersquad.org/post/case-study-kerala-s-first-deepfake-fraud
12. India Const. art. 19, cls. (1)(a), (2).
13. Information Technology Act, 2000.
14. Bharatiya Nyaya Sanhita, 2023.
15. Digital Personal Data Protection Act, 2023.
16. Shreya Singhal v. Union of India, AIR 2015 SC 1523.
17. Justice K.S. Puttaswamy (Retd.) v. Union of India, AIR 2017 SC 4161.
18. Laws of India (2023, September 2). Electronic evidence in the Indian Evidence Act: Navigating the digital frontier. https://lawsofindia.com/2023/09/02/electronic-evidence-in-the-indian-evidence-act-navigating-the-digital-frontier
19. Indian Cyber Crime Coordination Centre (I4C), Ministry of Home Affairs, Government of India, Annual Report 2022.
20. TOI Entertainment Desk, Suniel Shetty granted interim protection by Bombay High Court, etimes.in, October 11, 2025.
21. Jain, A. (2025). Deepfakes and Misinformation: Legal Remedies and Legislative Gaps. Indian Journal of Law, 3(2), 23–28.





