Authored By: Aryan Lal
NIMS
Abstract
The swift progression of artificial intelligence has profoundly reshaped the manner in which digital content is created, manipulated, and disseminated. One of the most contentious manifestations of this technological advancement is deepfake technology, which facilitates the production of highly realistic yet artificially fabricated audio-visual content. Although deepfakes have legitimate applications in areas such as film production, educational tools, and accessibility solutions, their malicious exploitation has emerged as a grave legal and societal issue. Increasingly, deepfake technology has been employed in perpetrating cyber fraud, spreading political disinformation, committing identity impersonation, damaging individual reputations, and facilitating non-consensual sexual exploitation. In the Indian context, where internet penetration and social media usage have expanded at an unprecedented pace, the scale and severity of harm arising from the misuse of deepfakes are particularly pronounced.1
This article undertakes a critical evaluation of whether India’s existing criminal law framework is sufficiently equipped to address offences enabled by deepfake technology. It analyses the applicability and effectiveness of relevant provisions under the Information Technology Act, 2000, the Indian Penal Code, 1860, constitutional safeguards, evolving judicial interpretations, and recent advisories issued by the Ministry of Electronics and Information Technology. The study identifies significant legislative ambiguities and enforcement shortcomings that undermine effective regulation and prosecution of deepfake-related offences. Further, it adopts a comparative approach by examining regulatory responses in select foreign jurisdictions to draw normative insights. The article concludes by proposing targeted legislative amendments and policy reforms aimed at strengthening India’s institutional and legal preparedness to combat crimes facilitated by deepfake technologies.2
1. Introduction
Artificial intelligence has emerged as a defining force of the contemporary digital landscape, significantly influencing governance structures, modes of communication, commercial activity, and entertainment industries. Among the most disruptive manifestations of artificial intelligence is deepfake technology, which employs sophisticated machine learning methods—most notably Generative Adversarial Networks (GANs)—to generate synthetic audio, visual, or image-based content that closely imitates real individuals. Unlike conventional techniques of image or video alteration, deepfakes possess the capacity to produce media that is exceedingly difficult to distinguish from genuine content, thereby substantially increasing the potential for deception and misuse.3
Although deepfake technology has legitimate and beneficial applications, including in filmmaking, virtual environments, and assistive and accessibility technologies, its malicious deployment has evolved into a significant threat to individual dignity and societal trust. Deepfakes have been used to fabricate speeches and actions of political figures, distort public opinion during electoral processes, create non-consensual sexually explicit material, and perpetrate complex financial fraud through voice cloning technologies. Such practices not only compromise democratic institutions and processes but also infringe upon personal autonomy and undermine confidence in the reliability of digital evidence.
India’s exposure to the harms associated with deepfake technology is exacerbated by its rapidly expanding digital ecosystem. With a vast and growing population of internet users and an increasing dependence on social media platforms for information dissemination and communication, manipulated content can spread with extraordinary speed and reach.4 Despite these challenges, India’s criminal law framework has not been specifically adapted to address offences arising from deepfake technology.5 The absence of a comprehensive and dedicated legal regime raises pressing concerns regarding the sufficiency of existing statutory provisions, enforcement mechanisms, and constitutional protections. This article contends that India’s current legal response to deepfake-related harms remains fragmented and largely reactive, underscoring the urgent need for coherent legislative and policy reforms.
2. Research Methodology
This article employs a doctrinal and analytical research methodology to examine the legal challenges posed by deepfake technology. The study relies on primary sources such as statutory provisions contained in the Information Technology Act, 2000 and the Indian Penal Code, 1860, relevant constitutional provisions, judicial pronouncements of Indian courts, and official government notifications and advisories. Secondary sources include peer-reviewed academic journals, policy-oriented studies, institutional reports, and comparative legal materials drawn from foreign jurisdictions.6 Through a critical assessment of these materials, the research evaluates the adequacy and effectiveness of the existing legal framework and advances reform-oriented recommendations grounded in constitutional values and broader policy considerations.
3. Understanding Deepfake Technology and Its Misuse
Deepfake technology denotes the application of artificial intelligence–driven algorithms to generate or alter digital media in a manner that misrepresents reality. Through the training of machine learning models on extensive datasets comprising images, videos, and audio recordings, deepfake systems are capable of reproducing facial expressions, vocal characteristics, and bodily movements with a high degree of precision. This advanced level of technological capability differentiates deepfakes from earlier forms of digital manipulation and gives rise to complex and unprecedented regulatory and legal challenges.
The malicious use of deepfake technology may be broadly categorised into multiple forms. Political deepfakes involve the artificial fabrication of statements or conduct attributed to public officials or political actors, with the potential to manipulate electoral processes and shape public perception.7 Pornographic deepfakes, which disproportionately affect women, entail the non-consensual incorporation of an individual’s likeness into sexually explicit material, resulting in profound psychological trauma, social stigma, and reputational damage. Financial deepfakes represent an emerging category of cybercrime, wherein AI-generated voice cloning is employed to impersonate corporate executives or trusted individuals in order to fraudulently induce monetary transfers. In addition, deepfake technology facilitates a range of related offences, including identity theft, extortion, and defamation. The breadth and variety of these applications underscore the multifaceted and evolving threat posed by deepfake technology.8
4. Existing Legal Framework in India
India does not currently have legislation specifically addressing deepfake technology. Instead, law enforcement agencies rely on a combination of provisions under the Information Technology Act, 2000 and the Indian Penal Code, 1860.
4.1 Information Technology Act, 2000
The Information Technology Act, 2000 constitutes India’s principal statutory framework for regulating cyber-related offences. Section 66D of the Act penalises the offence of cheating by personation through the use of computer resources and may be invoked in instances involving fraud facilitated by deepfake technology.9 Additionally, Sections 67 and 67A criminalise the electronic publication or transmission of obscene and sexually explicit material, thereby providing a measure of legal recourse against pornographic deepfake content. Nevertheless, these provisions were formulated prior to the advent of advanced artificial intelligence technologies and consequently fail to account for the distinctive characteristics of deepfakes, including their automated creation, realism, and capacity for rapid and large-scale dissemination.10
4.2 Indian Penal Code, 1860
The Indian Penal Code, 1860 addresses certain dimensions of deepfake-related misconduct through provisions concerning defamation, cheating, identity-based offences, and voyeurism. Section 499 criminalises defamation, while Section 354C specifically penalises acts of voyeurism, including the creation or dissemination of non-consensual images. Although these provisions may be invoked in cases involving the misuse of deepfake technology, they are constrained by significant conceptual and operational limitations, particularly with respect to proving criminal intent, attributing authorship, and addressing issues of cross-border jurisdiction.11 Consequently, the existing criminal law framework remains fragmented and inadequately equipped to comprehensively address offences arising from deepfake technologies.
5. Constitutional Implications
The malicious use of deepfake technology directly engages fundamental constitutional rights, most notably the right to privacy guaranteed under Article 21 of the Constitution of India. In Justice K.S. Puttaswamy v. Union of India, the Supreme Court affirmed that informational privacy constitutes an essential facet of the right to life and personal liberty.12 The unauthorised alteration or fabrication of an individual’s image, likeness, or voice through deepfake technology therefore amounts to a substantial infringement of this constitutionally protected right.
Concurrently, any regulatory response to deepfake content must be carefully calibrated to safeguard the freedom of speech and expression enshrined under Article 19(1)(a) of the Constitution. Restrictions imposed on deepfake-related content are required to conform to the standard of reasonableness prescribed under Article 19(2). This necessitates a delicate constitutional balancing exercise, as excessively broad or vague regulation risks suppressing legitimate expression, whereas insufficient regulation may prove ineffective in preventing serious and irreparable harm.13
6. Judicial Trends and Case Law
To date, Indian courts have not rendered a judicial decision that directly addresses the legal implications of deepfake technology. Nevertheless, judicial precedents in analogous domains offer significant interpretative guidance. In Shreya Singhal v. Union of India, the Supreme Court invalidated Section 66A of the Information Technology Act on the ground of vagueness, underscoring the necessity for clarity and precision in legislative measures regulating online speech.14 This ruling illustrates the constitutional vulnerabilities inherent in ambiguous or overly expansive regulatory frameworks governing digital content.
Furthermore, Indian courts have acknowledged the grave psychological, emotional, and social harm inflicted by the creation and dissemination of non-consensual intimate imagery. Judicial recognition of such harm is directly applicable to cases involving pornographic deepfakes and reinforces the imperative for comprehensive and effective legal safeguards to protect individual dignity and autonomy.15
7. Government Initiatives and MeitY Advisories
The Government of India has increasingly recognised the threats associated with the misuse of deepfake technology. In December 2023, the Ministry of Electronics and Information Technology issued a series of advisories to social media intermediaries, underscoring the obligation to comply with due diligence requirements prescribed under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.16 These advisories direct online platforms to expeditiously detect and remove deceptive synthetic content and to reinforce their grievance redressal frameworks.
Although these measures constitute a constructive policy response, they remain advisory in character and do not possess the binding authority of substantive criminal legislation. As a result, their capacity to effectively deter the creation and dissemination of deepfake content remains constrained.17
8. Comparative Perspective
Several foreign jurisdictions have undertaken specific legislative and regulatory measures to address the challenges posed by deepfake technology. In the United States, a number of states, including California and Texas, have enacted targeted laws aimed at curbing the malicious use of deepfakes, particularly in the electoral context, in order to safeguard democratic processes from manipulation. The European Union, through its proposed Artificial Intelligence Act, has adopted a risk-based regulatory framework that subjects deepfake systems to specific transparency and disclosure obligations, requiring that artificially generated or manipulated content be clearly identified as such by those who develop and deploy such systems.18
In contrast to these proactive developments, India’s regulatory response to deepfake technology remains disjointed and predominantly reactive. The comparative experience of foreign jurisdictions illustrates both the practicality and the pressing need for a dedicated legislative framework to effectively address the unique risks associated with deepfake technologies.19
9. Suggestions and Way Forward
India must adopt a forward-looking and comprehensive regulatory strategy to effectively address the challenges posed by deepfake technology. First, the enactment of dedicated legislation is essential to clearly define offences involving deepfake and other synthetic media technologies and to prescribe proportionate and deterrent penalties.20 Second, the Information Technology Act, 2000 should be suitably amended to explicitly regulate the creation, dissemination, and misuse of artificial intelligence–generated synthetic content. Third, sustained capacity-building measures are necessary to enhance the technical competence of law enforcement and investigative agencies, particularly in the areas of digital forensics and artificial intelligence.21
In addition, social media intermediaries should be subject to strengthened accountability frameworks, including mandatory labelling of synthetic content and expedited takedown obligations for harmful deepfakes.22 Public awareness and digital literacy initiatives are equally critical to inform users about the risks associated with deepfake technology and the legal remedies available to affected individuals.
10. Conclusion
Deepfake technology poses a substantial challenge to India’s criminal justice system in the contemporary digital environment. Although the existing legal framework provides certain remedial measures, it remains inadequate to effectively respond to the magnitude, complexity, and technological sophistication of harms facilitated by deepfakes. There is a pressing need for a progressive and coherent legal framework that is firmly rooted in constitutional principles and responsive to evolving technological developments, in order to protect individual rights, preserve democratic institutions, and maintain public confidence. Ultimately, India’s readiness to confront deepfake-related threats will hinge on its capacity to convert policy acknowledgment into concrete and effective legislative measures.
References
1. Information Technology Act, 2000, § 66D (India).
2. Information Technology Act, 2000, §§ 67, 67A (India).
3. Indian Penal Code, 1860, §§ 499, 354C (India).
4. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1 (Supreme Court of India).
5. Shreya Singhal v. Union of India, (2015) 5 SCC 1 (Supreme Court of India).
6. Ministry of Electronics and Information Technology (MeitY), Government of India, Advisory to Social Media Intermediaries Regarding Deepfake Content and Misinformation, December 2023; Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
7. Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 California Law Review 1753 (2019).
8. Ian J. Goodfellow et al., Generative Adversarial Networks, 63 Communications of the ACM 139 (2020).
9. OECD, Synthetic Media and Trust in the Digital Age (2021).
10. European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.
11. Clare McGlynn, Erika Rackley & Ruth Houghton, Beyond “Revenge Porn”: The Continuum of Image-Based Sexual Abuse, 25 Feminist Legal Studies 25 (2017).
1. Ministry of Electronics and Information Technology (MeitY), Government of India, Advisory on Deepfake Content and Misinformation (2023); OECD, Synthetic Media and Trust in the Digital Age (2021).
2. Ministry of Electronics and Information Technology (MeitY), Government of India, Advisory on Deepfake Content and Misinformation (2023); OECD, Synthetic Media and Trust in the Digital Age (2021).
3. Ian J. Goodfellow et al., “Generative Adversarial Nets,” Advances in Neural Information Processing Systems (2014); OECD, Synthetic Media and Trust in the Digital Age (2021).
4. Internet and Mobile Association of India, Digital India Report; Ministry of Electronics and Information Technology (MeitY), Government of India, Digital India Statistics.
5. Information Technology Act, 2000; Indian Penal Code, 1860.
6. OECD, Synthetic Media and Trust in the Digital Age (2021); Robert Chesney & Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” 107 California Law Review 1753 (2019).
7. Robert Chesney & Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” 107 California Law Review 1753 (2019).
8. Council of Europe, Artificial Intelligence and Criminal Law (2020).
9. Information Technology Act, 2000, § 66D.
10. Law Commission of India, Report on Cyber Crimes and Emerging Technologies; Ministry of Electronics and Information Technology (MeitY), Advisory on Deepfake Content (2023).
11. Law Commission of India, Report on Cyber Crimes; Europol, Facing Reality? Law Enforcement and the Challenge of Deepfakes (2022).
12. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
13. Constitution of India, arts. 19(1)(a), 19(2), 21; Gautam Bhatia, Offend, Shock, or Disturb: Free Speech under the Indian Constitution (2016).
14. Shreya Singhal v. Union of India, (2015) 5 SCC 1.
15. State of West Bengal v. Animesh Boxi, 2018 SCC OnLine Cal 6372; Law Commission of India, Reports on Cyber Crimes and Image-Based Abuse.
16. Ministry of Electronics and Information Technology (MeitY), Government of India, Advisory to Social Media Intermediaries on Deepfake Content (Dec. 2023); Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
17. OECD, Synthetic Media and Trust in the Digital Age (2021); Law Commission of India, Reports on Emerging Technologies and Cyber Regulation.
18. European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) (2021).
19. OECD, Synthetic Media and Trust in the Digital Age (2021); Council of Europe, Artificial Intelligence and Human Rights (2020).
20. Law Commission of India, Reports on Emerging Technologies and Criminal Law Reform.
21. United Nations Office on Drugs and Crime (UNODC), Cybercrime and Electronic Evidence (2020).
22. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021; OECD, Synthetic Media and Trust in the Digital Age (2021).