Authored By: Dipika Sharma
Bundelkhand University
Abstract
The use of AI-generated deepfakes presents novel threats to personal dignity, public discourse and national security in India. This paper considers the current state of Indian law's approach to deepfakes, reviewing how the Information Technology Act, 2000, penal law (IPC 1860, now BNS 2023) and constitutional rights address the issue. It then surveys recent judicial responses and cases, including various injunctions passed by the Delhi HC in celebrity deepfake matters [1][2], and evaluates the gaps in the current framework. While existing provisions such as ITA §§66C–66E and 67 (which criminalise identity theft, cheating by personation using a computer resource, violation of privacy, and publishing or transmitting obscene material in electronic form) and IPC provisions on defamation and fraud provide partial cover, these provisions lack specifically targeted measures [3][4]. Beyond the usual evidentiary difficulties, other critical issues are cross-border enforcement and striking a balance between free speech and privacy [5][6]. The paper further reviews new developments at the national level (the 2025 IT Rules amendments, Data Protection Act 2023, BNS 2023) as well as at the international level. The conclusion offers recommendations, including a dedicated "synthetic media" law, stronger platform duties and victim remedies, to protect rights with minimal chilling of innovation [7].
Introduction
Recent incidents highlight the urgency of legal reform. Hyper-realistic deepfake videos have flooded social media worldwide, including in India. Notably, a deepfake of a Bollywood actress (Rashmika Mandanna) went viral in 2023, and doctored election campaign clips emerged during the 2024 polls [8][9]. In late 2025, prominent celebrities (e.g., Salman Khan, Ajay Devgn, Aishwarya Rai) moved the Delhi High Court to restrain unauthorized AI-generated uses of their likeness [1][2]. These events underline that deepfakes can enable defamation, financial fraud or non-consensual pornography, harming individual dignity and eroding public trust [10][6].
This paper analyses India's legal response to deepfakes. Part A outlines the constitutional and statutory framework; Part B surveys judicial pronouncements; Part C critically appraises the gaps and challenges; and Part D turns to recent legislative and regulatory developments. Global approaches to the challenges posed by AI-generated synthetic media are also compared. The methodology is essentially doctrinal and analytical: the key statutes, case law and relevant policy documents are examined, and their application to AI-generated synthetic media is interpreted in light of scholarly and official sources [10][11]. Comparative insights are drawn from experience elsewhere, particularly in the US, the EU and China, to inform the way ahead for India.
Research Methodology
The study adopts a doctrinal legal approach. Primary sources comprise legislation—including the Information Technology Act 2000, Bharatiya Nyaya Sanhita 2023, Digital Personal Data Protection Act 2023, and the Constitution of India—along with reported cases. Recent amendments and rules, such as the Intermediary Guidelines 2021 and their 2025 amendments, are analyzed. Secondary literature reviewed includes legal journal articles, government reports, and credible news coverage. Empirical data, for example cybercrime statistics, are consulted where available. International legal developments have been surveyed from secondary sources to provide a comparative perspective [12][13]. Overall, the research is analytical and descriptive, as its focus is to identify the legal contours of, and reform needs in, the deepfake context in India.
Legal Framework
There is no deepfake-specific statute in India; existing laws are applied. The Constitution guarantees freedom of speech but permits reasonable restrictions in the interests of sovereignty, public order, defamation, etc. [6]. Courts have held that free speech must be balanced against individual dignity and the right to privacy [14][5]. In K.S. Puttaswamy v. Union of India (2017), the Supreme Court held that privacy is part of the right to life, thereby securing a person's control over their image and personal data [5]. Deepfakes thus implicate both Art. 19 and Art. 21 interests.
Key statutes include:
- Information Technology Act, 2000: Sections 66C and 66D punish identity theft and cheating by personation using a computer resource; these could apply when deepfakes impersonate an individual to defraud or deceive [4]. Section 66E criminalizes capturing or publishing images of a person's private area without consent—intended to protect privacy against voyeuristic attacks—and can cover non-consensual deepfake pornography [4]. Section 67 penalizes publishing or transmitting obscene electronic content [4]. Section 69A allows government orders blocking online content in the interests of public order, upheld subject to due-process safeguards in Shreya Singhal v. Union of India [15]. Section 79 grants intermediaries conditional immunity ("safe harbour") contingent on compliance with prescribed "due diligence."
- Indian Penal Code 1860 (now Bharatiya Nyaya Sanhita, 2023): Defamation (IPC §§499–500, replaced by BNS §356) can apply to false statements made via synthetic media that harm reputation [16]. Criminal intimidation (IPC §503; BNS §351) and cheating and cheating by personation (BNS §§318–319) may cover coercion or deception via deepfakes. Voyeurism (BNS §77) penalizes disseminating images of private acts of women, overlapping with non-consensual deepfake pornography. Forgery offences (BNS §336) might apply where documents or signatures are falsified [17][18].
- Constitutional Rights: Article 19(1)(a) covers speech, including some parody/satire, but not defamatory or obscene content [14]. Article 21 safeguards digital privacy. Deepfakes infringing privacy or bodily integrity may raise Art.21 issues, provided any restrictions comply with Art.19(2) [6].
- Other Laws: The Digital Personal Data Protection Act, 2023 mandates consent for processing personal data, including biometric images [19]. Unauthorized use of a person's likeness in a deepfake could breach its consent provisions (§6), inviting penalties of up to ₹250 crore [19].
In summary, India’s legal “spine” for deepfakes is a patchwork. As one scholar notes, “provisions under the IT Act 2000, IPC 1860, [and] BNS 2023… offer partial protection, but they lack specificity for deepfake-related harms” [3].
Judicial Interpretation
Indian courts have begun applying these laws to deepfake-related issues in the absence of deepfake-specific precedents. At the highest level, Shreya Singhal v. Union of India (2015) is instructive: the Supreme Court struck down the notoriously vague Section 66A but upheld the constitutional validity of Section 69A's blocking powers, emphasizing due process in curbing online speech [15]. This implies that any restriction on digital content—including deepfakes—must meet the requirements of Article 19(2) and adhere to procedural safeguards.
No landmark Supreme Court case has yet defined a "deepfake" offence. On personality rights, however, Titan Industries Ltd. v. Ramkumar Jewellers (Delhi HC, 2012) recognized a right to publicity in the commercial context: unauthorized use of an individual's identity for gain violates personal rights (though Titan concerned conventional advertising, it affirms control over one's image). More recently, in 2023–25, several Delhi High Court injunctions demonstrate judicial protection of identity against AI misuse. For instance, the Court restrained websites from exploiting Abhishek Bachchan's image, finding that non-consensual AI-generated misuse injures personality rights [20][21]. In Titan Industries and related dicta, courts affirmed that unauthorized use of one's name or likeness, especially for commercial or harmful purposes, can be enjoined, a doctrine now being applied to AI deepfakes.
The courts have also reiterated that the rights to privacy and reputation qualify unbridled speech. In Indian Express Newspapers v. Union of India, the Supreme Court held that free speech must be balanced against the right to reputation and dignity. In the context of parody and satire, cases like Khushboo v. Kanniammal and Ashutosh Dubey v. Netflix expanded permissible speech while stressing its limits: defamation and obscenity are not protected under Article 19. By extension, an AI-generated deepfake parody of a public figure might be permissible as satire, but a deepfake that maliciously distorts facts to defame or incite would lose protection. In Subramanian Swamy v. Union of India, the Court upheld criminal defamation as a reasonable restriction, placing defamation outside the ambit of protected speech. Thus, a defamatory AI-generated deepfake should be actionable like any other defamation under penal law.
Procedurally, the courts have generally granted timely relief. The Delhi HC has passed broad orders directing social media intermediaries to take down deepfake content expeditiously. In one seminal order, the Court held that it "cannot turn a blind eye" to unauthorized AI-generated content and restrained platforms from hosting such material [23]. These orders often rest on broad principles of fairness and tort law, since specific statutory provisions do not exist. Collectively, judicial constructions help bridge some gaps: personality and privacy rights are treated as analogous to conventional rights, allowing interim relief against deepfake misuse.
Critical Analysis
Notwithstanding these uses of existing law, significant challenges remain.
First, legislative gaps remain. Most of the relevant IT Act sections presuppose specific conditions. Section 66E, for instance, applies only to the capture or publication of intimate images; a deepfake satire or political video using a person's likeness may involve no "private area" or sexual content and thus fall outside its ambit. Similarly, the IPC is confined to discrete harms (defamation, fraud, obscenity) and contains no offence for merely creating misleading synthetic media. As Kashyap notes, existing laws afford only "partial protection" and "lack specificity" when it comes to deepfake harms [3]. Enforcement agencies are forced to fit deepfake cases into existing categories (e.g., fraud or hacking), resulting in uncertainty.
Second, technical and evidentiary barriers hinder enforcement. Detecting deepfakes usually requires specialized tools and expertise. Police and prosecutors are not always equipped with forensic AI analytics. By the time courts hear a case, evidence may be deleted or irreversibly spread online. Additionally, attribution is difficult: identifying the original creator of a deepfake—who may be overseas and anonymous—is a major impediment. As one commentator notes, Indian courts currently lack a standard mechanism to address cross-border deepfake abuse, leaving victims reliant on diplomatic channels and piecemeal conflict-of-laws principles [24].
Third, jurisdictional limits exacerbate the problem. Deepfake videos spread across international platforms. Even if Indian courts issue injunctions (as in Delhi HC cases), enforcing them against foreign creators or publishers may be impossible. Section 79 of the IT Act grants Indian intermediaries safe harbor only if they comply with guidelines, but it does not bind foreign entities hosting content on overseas servers.
Fourth, the constitutional balance remains delicate. Overbroad laws risk chilling speech. For example, a law criminalizing "all AI-generated imitation" without exception could sweep in lawful parody or political satire. The 2023 "Fact Check Unit" rule (which empowered the government to censor "fake" content without clear criteria) was struck down as unconstitutional [25]. This underlines the need for precision and due process: any deepfake regulation must clearly define offences and ensure judicial review to satisfy Article 19.
Fifth, intermediary compliance is inconsistent. The IT Rules 2021 impose duties on platforms (timely content removal, grievance redressal, etc.), but enforcement has been erratic. In practice, social media sites may apply filters unevenly—over-removing borderline content (overcensorship) or failing to act on clear deepfakes (under-enforcement). A recent legal analysis observes that leaving content assessment to platforms “may lead to varied standards” and proposes licensing or mandatory labeling for AI makers and creators [26]. This indicates a growing realization that mere reliance on intermediaries is insufficient.
Finally, enforcement data reveal a gap between complaints and convictions. Available crime reports and expert analyses suggest a low conviction rate for cyber offences relative to the number of incidents. Although deepfake-specific statistics are scarce, the general trend shows victims often facing frustration with slow legal remedies. In short, India's approach suffers from an "enforcement–legislation disconnect" [27]. Without faster procedures and updated legal tools, abuses may continue largely unchecked.
Recent Developments
Recognizing these issues, India and other jurisdictions are updating their rules. Notably, the IT Rules Amendment 2025: in November 2025, the government amended the Intermediary Guidelines (IT Rules, 2021) to address AI-generated content [13]. The new rules officially define "synthetically generated information" as any content artificially created or modified by computer so as to appear authentic [28]. Platforms are now required to prominently label AI-generated content—for images and videos, a watermark covering at least 10% of the surface; for audio, a verbal notice lasting 10% of the duration [29]. Users must declare at upload whether content uses AI, and platforms must implement "reasonable" technology to verify such declarations [30]. Crucially, platforms must proactively remove harmful AI content (without waiting for a court order) or risk losing safe-harbour immunity under ITA §79 [31]. These provisions—echoing EU and Chinese rules—mark a significant step toward regulatory control, aiming at transparency and swift takedowns [29][32].
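To make the 10% labelling thresholds concrete, the following minimal Python sketch shows how a platform might check them. The function names, parameters and example dimensions are hypothetical illustrations, not part of the Rules or of any official compliance tool.

```python
# Hypothetical sketch of the 2025 labelling thresholds; names and values
# are illustrative only, not an official compliance check.

def image_label_compliant(img_w: int, img_h: int,
                          label_w: int, label_h: int) -> bool:
    """Visible label must cover at least 10% of the image surface."""
    return (label_w * label_h) >= 0.10 * (img_w * img_h)

def audio_label_compliant(total_sec: float, notice_sec: float) -> bool:
    """Verbal AI notice must last at least 10% of the audio duration."""
    return notice_sec >= 0.10 * total_sec

# A 1920x1080 frame with a 640x360 corner watermark: ~11.1% coverage.
print(image_label_compliant(1920, 1080, 640, 360))  # True
# A 60-second clip with a 5-second spoken notice: ~8.3%, below threshold.
print(audio_label_compliant(60.0, 5.0))             # False
```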
Also, the Digital Personal Data Protection Act, 2023 (to come into force shortly) makes consent requirements more stringent. It specifically mandates consent for processing biometric or avatar data [19], which covers the use of a person's face or voice in deepfakes. The Act imposes penalties of up to ₹250 crore for serious breaches, providing leverage against the misuse of personal data in AI.
In penal law, the Bharatiya Nyaya Sanhita, 2023 (BNS) replaced the IPC on 1 July 2024 [33]. The BNS recasts offences but retains provisions relevant to deepfakes. For example, BNS §353 penalizes statements conducing to public mischief (misinformation), §356 codifies defamation, and §§318–319 cover cheating and cheating by personation [19]. These may apply to malicious deepfakes—for example, a fabricated speech that provokes unrest. Similarly, the new Bharatiya Nagarik Suraksha Sanhita (replacing the CrPC) and Bharatiya Sakshya Adhiniyam (replacing the Evidence Act) modernize procedure and admissibility, including standards for electronic evidence [34]. Notably, §63 of the new evidence law requires authentication of electronic records (hashes and origins), which could help courts distinguish fake from genuine content [34].
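As a simple illustration of hash-based authentication of an electronic record, the Python sketch below recomputes the SHA-256 digest of a media file and compares it with the digest recorded at seizure. The file paths are placeholders, and the snippet is an illustrative example rather than a prescribed statutory procedure.

```python
# Illustrative hash verification for an electronic record; paths are
# placeholders, and this is not a prescribed statutory procedure.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The record is authenticated only if the digest recorded at seizure
# matches the digest of the copy later produced in court.
seized   = sha256_of_file("evidence/video_at_seizure.mp4")
produced = sha256_of_file("evidence/video_in_court.mp4")
print("match" if seized == produced else "integrity check failed")
```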
Meanwhile, enforcement agencies and regulators have issued advisories. For instance, MeitY (Nov 2023) ordered platforms to remove deepfakes targeting women within 36 hours [35]. Election authorities (2024) required political parties to take down deepfake posts within 3 hours during campaigns [36]. CERT-In (2024) recommended watermarking and AI-detection tools for platforms [37][38]. Significantly, the government’s Fact-Checking Unit rule (Sept 2023) was struck down as unconstitutional [25], reflecting judicial insistence on procedural safeguards.
Globally, India's moves align with international trends. The EU's AI Act (2024) imposes strict transparency and liability rules, including mandatory watermarking of generated content [39]. China's Cyberspace Administration (2023) likewise requires labeling of synthetic media and bans unlawful deepfakes [32]. In the US, federal and state proposals have proliferated: the DEEPFAKES Accountability Act (2023) targets malicious deepfakes, and the TAKE IT DOWN Act (2025) criminalizes non-consensual explicit deepfakes [12]. India's 2025 IT Rules now join this global framework, mandating labels and requiring platforms to act without awaiting court orders [29][31]. Collectively, these parallels reflect a growing recognition that deepfakes can undermine democracies and must be curtailed through practical, transparent regulation.
Suggestions / Way Forward
A multi-pronged approach is required to bridge these gaps. First, deepfake harms need to be explicitly defined and penalized through legislative reform. Parliament could enact a Synthetic Media Act or amend the ITA/BNS to make "unauthorized synthetic replication of identity with intent to harm" an offence. Penal provisions may further be calibrated by harm—for instance, higher penalties for non-consensual pornography or election interference. Such a harm-based approach—targeting malice rather than mere speech—would also align with constitutional safeguards. Indeed, Kashyap's analysis recommends "graduated harm-based penalties, platform accountability standards, and victim compensation mechanisms" in a new synthetic media law [3].
Platform regulation must be enforced. Social media and technology companies should deploy robust deepfake detection and watermarking, for example through the C2PA provenance standard or AI-detection APIs; a simple provenance-binding sketch follows below. The government may require periodic transparency reports, as under the IT Rules, and audit compliance. Loss of safe harbour under §79 should be strictly applied against repeat offenders. Education and best practices should also be promoted through industry collaboration; this could include evaluation by an independent certification authority to ensure the compliance of AI media-generation tools.
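The following Python sketch illustrates the basic idea behind provenance labelling: binding an AI-generation claim to the exact bytes of a media file with a cryptographic hash, so that any subsequent edit invalidates the claim. It is a deliberately simplified, hypothetical example and does not implement the actual C2PA specification, which additionally relies on digital signatures and structured manifests.

```python
# Simplified, hypothetical provenance binding; NOT the real C2PA format.
import hashlib
import json

def make_manifest(content: bytes, generator: str) -> str:
    """Bind an AI-generation claim to the exact bytes of a media file."""
    return json.dumps({
        "claim": "ai-generated",
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    })

def manifest_matches(content: bytes, manifest: str) -> bool:
    """Re-hash the received media and compare with the claimed digest."""
    return json.loads(manifest)["sha256"] == hashlib.sha256(content).hexdigest()

media = b"...synthetic video bytes..."
m = make_manifest(media, "example-model-v1")
print(manifest_matches(media, m))         # True: bytes unchanged
print(manifest_matches(media + b"x", m))  # False: any edit breaks the bind
```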
Judicial and police training is equally essential. Investigators must be trained in digital forensics to trace deepfake origins. Courts and law enforcement need clear procedures for handling deepfake evidence, consistent with the new Evidence Act provisions. Fast-track mechanisms—such as emergency injunctions or e-FIRs under BNSS—would provide victims quicker relief. Cybercrime cells (e.g., at I4C) should be equipped with deepfake forensic laboratories.
Public awareness and reporting mechanisms are vital to mitigate harm. Citizens need education about AI-generated content and how to verify or report deepfakes. A dedicated cyber tipline or complaint portal could accelerate responses. The government should also work with international organizations to improve cross-border coordination—such as establishing treaties on cyber evidence sharing.
Finally, while policing misuse, respect for legitimate AI use must be maintained. Regulations should clearly exclude bona fide satire, art, and research. Overbroad censorship risks chilling innovation and free expression. Hence, a targeted regulatory framework—focusing on malicious intent, non-consensual use, and clear legal breaches, while protecting fair use—is ideal.
In sum, India requires a balanced, multi-stakeholder regime that integrates technological standards, legal clarity, and rights protection to tackle deepfakes effectively without hindering digital innovation.
Conclusion
AI-driven deepfakes present an unprecedented challenge to India’s legal system. This article has examined how existing Indian law addresses synthetic media: constitutional guarantees of speech and privacy, the IT Act’s cyber offences, and penal provisions on defamation, fraud, and obscenity all contribute to the framework. Judicial decisions—most notably from the Delhi High Court—reflect growing recognition of personality and privacy rights against deepfake misuse.
However, "critical gaps" persist in the legal architecture: no provision explicitly mentions "deepfakes," and enforcement remains technically and procedurally complex [3]. While recent reforms—such as amendments to the IT Rules and new data and penal laws—represent progress, India must supplement these with dedicated legislation and strong enforcement. Coordinated action among legislators, the judiciary, and technology platforms is essential to balance innovation with fundamental rights, thereby safeguarding individuals and democracy from AI-generated deception.
References / Bibliography
- Indian Constitution (1949), Articles 19, 21.
- Information Technology Act 2000 (India), ss. 66C, 66D, 66E, 67, 69A, 79.
- Bharatiya Nyaya Sanhita 2023 (India), ss. 77, 111, 318, 319, 336, 351, 353, 356.
- Digital Personal Data Protection Act 2023 (India), ss. 4, 6.
- Shreya Singhal v. Union of India, (2015) 5 SCC 1.
- K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
- Indian Express Newspapers (Bombay) Pvt. Ltd. v. Union of India, (1985) 1 SCC 641.
- S. Khushboo v. Kanniammal, (2010) 5 SCC 600.
- Ashutosh Dubey v. Netflix Inc. (Delhi HC, 2020).
- Subramanian Swamy v. Union of India, (2016) 7 SCC 221.
- Titan Industries Ltd. v. Ramkumar Jewellers, (2012) 50 PTC 486 (Delhi HC).
- Kashyap, Sommya, "The Digital Mirage: India's Evolving Legal Battle Against Deepfake Technology" (2025) 22(2) SCRIPTed 162 [3].
- Bajpai, Yash, "Me, Myself and AI: Chasing Deepfakes Across Borders Without Losing Your Rights" (SCC Online Times, 8 November 2025) [40][5].
- Anand, Dhruv & Khanna, Dhananjay, "Fighting deepfakes needs nimble but realistic laws" (Law.Asia, 28 November 2025) [29][31].
- Government of India, Ministry of Electronics & IT, "Deepfakes in India: Legal Landscape, Judicial Responses, and a Practical Playbook for Enforcement" (NeGD blog, 29 Sept. 2025) [41][19].
- Suresh, R., et al., "Deepfake Evidence and the Indian Criminal Justice System" (2025) 7(6) IJFMR 3313 [12][32].
- Gupta, Pariansh & Dixit, Srishti, "Analysing Legal Framework of Regulating Deepfake Technology and Misinformation in India" (2025) 6(10) IJRPR 3296 [4][42].
- "Bollywood stars move Delhi HC to protect personality rights amid AI, deepfake threats", Daijiworld News (12 Dec. 2025) [1][20].
- "Salman Khan to Aishwarya Rai: Why Bollywood stars are rushing to court to protect their personality rights", Financial Express (11 Dec. 2025) [2][23].
- Drishti IAS, "AI Generated Content Regulation in India" (25 Oct. 2025) [7][43].
- European Union, Artificial Intelligence Act (2024) (EU Regulations on AI).
- Cyberspace Administration of China, “Regulations on Generative AI” (2023) (labeling requirement).
- DEEPFAKES Accountability Act, H.R. 3438, 118th Cong. (2023).
- TAKE IT DOWN Act, S. 1603, 119th Cong. (2025).
[1] [20] Bollywood stars move Delhi HC to protect personality rights amid AI, deepfake threats – Daijiworld.com https://www.daijiworld.com/news/newsDisplay?newsID=1300554
[2] [21] [23] Salman Khan to Aishwarya Rai: Why Bollywood stars are rushing to court to protect their personality rights – Entertainment News | The Financial Express https://www.financialexpress.com/life/entertainment-salman-khan-to-aishwarya-rai-why-bollywood-stars-are-rushing-to-court-to-protect-their-personality-rights-4073159/
[3] [10] [11] The Digital Mirage: India's Evolving Legal Battle Against Deepfake Technology (SCRIPTed, PDF) https://journals.ed.ac.uk/scripted/article/download/12004/14850/42622
[4] [17] [18] [42] Analysing Legal Framework of Regulating Deepfake Technology and Misinformation in India https://ijrpr.com/uploads/V6ISSUE10/IJRPR53926.pdf
[5] [14] [16] [22] [24] [40] Chasing Deepfakes Across Borders & Protecting Rights https://www.scconline.com/blog/post/2025/11/08/deepfake-regulation-rights/
[6] [9] [15] [19] [25] [33] [34] [35] [36] [37] [38] [41] Deepfakes in India: Legal Landscape, Judicial Responses, and a Practical Playbook for Enforcement – NeGD – National e-Governance Division https://negd.gov.in/blog/deepfakes-in-india-legal-landscape-judicial-responses-and-a-practical-playbook-for-enforcement/
[7] [43] AI Generated Content Regulation in India https://www.drishtiias.com/daily-updates/daily-news-editorials/ai-generated-content-regulation-in-india
[8] [13] [26] [28] [29] [30] [31] India Tightens Rules on Deepfakes and AI-Generated Content | Law.asia https://law.asia/india-deepfake-regulations/
[12] [32] [39] Deepfake Evidence and the Indian Criminal Justice System (IJFMR) https://www.ijfmr.com/papers/2025/6/60298.pdf
[27] The Digital Mirage: India’s Evolving Legal Battle Against Deepfake Technology | SCRIPTed: A Journal of Law, Technology & Society https://journals.ed.ac.uk/scripted/article/view/12004