Authored By: Vikrant Madhurjya
NEF Law College
Introduction
In today's digital world, where almost every person engages with digital technology in some form, some fall victim to the dark side of this virtual experience. The world has progressed to such lengths that nearly everything has been digitalized: finance, education, communication, art, trade, healthcare, and even law. In recent years, Artificial Intelligence (AI) has emerged and is transforming the world in far-reaching ways. Although it has clear benefits, it also has several drawbacks, chiefly psychological effects such as laziness, reduced attention, cognitive offloading, and decision fatigue. This article focuses on one problem in particular: the rise of deepfakes created with artificial intelligence, which have left a deep mark on society. Deepfakes are fabricated visual or audio objects, in the form of images, videos, or sound, created with the help of AI. Because AI is accessible to every individual, it is also accessible to individuals who lack moral responsibility, leading to its unethical misuse. The research question of this legal article is therefore: "To what degree do current detection technologies, platform moderation policies, and Indian laws fail to mitigate the ongoing harm to victims of non-consensual deepfakes, and in what ways can we prevent re-traumatization?" We will cover how the emergence of deepfakes is creating problems in our digitally transformed society, how such harm can be prevented, and the urgent need for stronger laws and policies. This issue is not merely technological; it is deeply connected to people's privacy, dignity, and reputation.
The rapid spread of deepfakes has also raised concerns about digital consent and the responsibility of online intermediaries. Addressing deepfake harm therefore requires not only legal reform but also technological safeguards, public awareness, and coordinated regulatory responses from both the State and digital platforms. The discussion in this article seeks to contribute to the evolving discourse on digital rights, accountability, and the future of regulation in the age of artificial intelligence.
What are “Deepfakes” and How do they emerge?
Definition –
Deepfakes are fabricated images, videos, or audio created through artificial intelligence using AI tools such as specialized deepfake software, AI websites, mobile apps, voice-cloning tools, and custom AI models. With these tools, deepfakes can fully simulate real people or events; they are most often used for entertainment or marketing.
Emergence of Deepfakes –
To understand the emergence of deepfakes, we must look at the history of the technology and how it developed.
In the 1990s – The technology can be traced back to the 1990s, when researchers began experimenting with computer-generated imagery (CGI) to create realistic human images, marking the starting point of synthetic media.
In the 2010s – Progress in machine learning, large datasets, and computing power greatly improved image and video synthesis, and deepfake technology began moving from theory to practical experimentation.
In 2014 – This year marked a major breakthrough, when Ian Goodfellow and his team introduced Generative Adversarial Networks (GANs). GANs allowed AI systems to generate highly realistic images, video, and audio, and became the technical foundation for most modern deepfakes.
In 2017 – Tools like DeepFaceLab became open-source and publicly accessible, allowing ordinary users to create deepfakes. The technology moved beyond research labs and began appearing online, and misuse, including non-consensual content, also started emerging.
In 2020 – OpenAI released GPT-3, extending generative AI from visuals into human-like written text. This marked the rise of broader generative AI systems.
In 2021 – Deepfake tools improved across voice cloning, lip-syncing, and motion transfer, becoming markedly more realistic.
In 2022 – Stable Diffusion allowed high-quality image generation on personal devices, and ChatGPT made generative AI widely known to the public, shifting deepfakes into everyday digital creation.
In 2025 – Deepfakes became part of people's everyday lives. Synthetic media is now seen all over social media as a result of years of growth. Major tech companies limited AI outputs as demand surged, and generative AI became both a useful tool and a security concern.
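The adversarial principle behind the GANs introduced in 2014 can be illustrated with a deliberately simplified toy: a generator learns to produce samples that a discriminator cannot tell apart from real data, and the two improve by competing against each other. The sketch below is a one-dimensional toy with hand-derived gradients, purely pedagogical and nothing like a real deepfake model; all parameter names and values are illustrative assumptions:

```python
import numpy as np

# Toy 1-D GAN: "real" data ~ N(4, 1); the generator G(z) = a*z + c tries to
# fool a logistic discriminator D(x) = sigmoid(w*x + b). Gradients are
# derived by hand for this tiny model. Illustrative sketch only.
rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

w, b = 0.1, 0.0   # discriminator parameters
a, c = 1.0, 0.0   # generator parameters (starts far from the real data)
lr, target_mean = 0.05, 4.0

for step in range(2000):
    real = rng.normal(target_mean, 1.0, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + c

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step (non-saturating loss): minimize -log D(fake).
    d_fake = sigmoid(w * fake + b)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_c = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    c -= lr * grad_c

# After training, the generator offset c has drifted from 0 toward the
# real mean, i.e. the generator's output distribution mimics the real one.
```

The same competition, scaled up to deep networks over pixels and audio samples rather than two scalar parameters, is what makes modern deepfake imagery so hard to distinguish from genuine footage.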
Impact of Deepfakes on Innocent Victims
The progression of technology and digital means has hugely benefited society in many areas of development, but it has also raised concerns about misuse and harmful effects. AI tools have given rise to deepfakes, and their open-source public availability leads to their use for unethical and unlawful purposes. Deepfakes now appear all around the world, whether on social media, pornographic websites, in the entertainment and media industries, or in news and political campaigns. As a result, deepfakes, though fake, virtual, or imaginary, are causing real-world crises and problems that raise national concern. In India, protection against this harm was crystallized with the enforcement of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026[2], effective from February 20th, 2026. Even though many laws protect innocent victims from such harm, gaps remain. Let us discuss them –
Pornographic Content – Over 95-98% of deepfakes online are pornographic, with 90-99% targeting women, often without consent. Tools like DeepFaceLab or "nudify" apps enable individuals lacking moral responsibility to swap a stranger's face into adult material. Victims are mostly ordinary women, celebrities, influencers, and even under-age girls. This amounts to image-based sexual abuse (IBSA), in which a person's likeness is exploited without consent. It violates constitutional and statutory rights and other existing laws, including Article 21 of the Indian Constitution[3] and Sections 66C, 66D, 66E, 67 and 67A of the Information Technology Act, 2000[4].
Payal Dhare, also known as Payal Gaming, a well-known YouTuber with millions of followers for her gaming and esports content, became a recent Indian example of how non-consensual deepfake pornography can harm innocent people regardless of age, gender, or fault. In December 2025, a pornographic video falsely claiming to depict her spread across the internet. She publicly denied any involvement and stated that the clip was fake and used her name and image without her consent. Online commentary also noted that the material appeared to be AI-generated or digitally created. The circulating video provoked intense curiosity about her personal life and damaged her reputation and emotional well-being; she described it as humiliating and dehumanizing, and said it hurt her dignity, respect, and family. The incident drew the attention of cyber authorities and led to warnings against sharing unverified obscene content online, as well as possible legal action. It demonstrates how pornographic deepfakes violate privacy, dignity, reputation, and identity rights, and how the Information Technology Act's existing remedies for privacy breaches, electronic obscenity, impersonation, and defamation are pressed into service even though India has no specific deepfake law, revealing a widening gap between technological misuse and legal protection.[5]
Open-Source Public Access Increases the Problem – The main issue is the open-source public accessibility of AI tools such as DeepFaceLab, which allows individuals to misuse them very easily due to the lack of restrictions and their free availability. Although open access lets people explore their creativity freely, those with unethical intentions can cause serious reputational harm. Moreover, because of the advancement of these tools, a person no longer needs professional skill or training to create deepfakes, which greatly amplifies misuse and damage, generating thousands of sexualised images every hour.
Psychological Harm Persists Despite Takedowns – Even with removal within two to three hours under the Information Technology Act, 2000, the damage remains and is often irreversible. Victims frequently experience Post-Traumatic Stress Disorder (PTSD), with symptoms including anxiety, depression, shame, guilt, and hopelessness. Once exposed on the internet, such content is archived on the dark web, screenshotted, shared offline, or re-uploaded. Victims face the constant fear of discovery by family, employers, or communities, risking ostracism, isolation, job loss, and relationship breakdown.
No Compensation for Damages – India's 2026 Rules prioritize platform duties and threaten loss of safe harbour for non-compliance, but they offer no direct victim compensation. No statutory damages, rehabilitation funds, or dedicated civil remedies exist for psychological or economic losses. Victims must rely on slow civil suits or criminal provisions, facing hurdles in evidence, the anonymity of creators, and jurisdictional issues. Globally, a few jurisdictions (such as some US states, and federal bills like the TAKE IT DOWN and DEFIANCE Acts) provide damages, but India lags in victim-centric support.
Legal Frameworks & Comparative Analysis
India has no dedicated law governing deepfakes. Instead, deepfakes are addressed through already existing laws, such as the Bharatiya Nyaya Sanhita, 2023[6], the Information Technology Act, 2000, and intermediary rules. The Information Technology Act, 2000 lies at the heart of this framework, criminalizing cybercrimes including identity theft, cheating by impersonation, privacy violations, and the unlawful publication of obscene content under Sections 66C, 66D, 66E, 67, and 67A. These provisions were passed well before deepfakes reached common hands, yet they are applied to this day against data manipulation and online harassment.
A major development occurred with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, which came into force on 20 February 2026. These rules explicitly bring “synthetically generated information” within regulatory oversight and impose new duties on platforms. Intermediaries must now clearly label AI-generated or altered content, maintain metadata transparency, and remove unlawful synthetic media within extremely short timelines—generally three hours, and even faster for intimate or harmful content. Failure to comply may lead to loss of safe-harbour protection under Section 79, exposing platforms to direct liability.
Beyond the IT framework, victims may rely on criminal provisions in the Bharatiya Nyaya Sanhita 2023, along with laws protecting women, children, elections, and workplace dignity, demonstrating India’s multi-statutory but fragmented approach.
Comparatively, foreign jurisdictions are moving toward more targeted deepfake regulation. The European Union uses a risk-based approach through AI governance and data protection regimes, while the United Kingdom has recently criminalised non-consensual intimate deepfake imagery and imposed stricter platform responsibilities for removal and prevention.
Thus, while India’s 2026 amendments represent a significant shift toward proactive intermediary accountability, its framework still relies heavily on adapting older cyber and criminal laws. In contrast, many foreign systems are increasingly adopting explicit deepfake-specific offences, signalling a global transition from reactive liability to preventive regulation.
Recommendations & Suggestions
Some recommendations & suggestions for protecting innocent victims are as follows –
- Built-in Safeguards in AI Tools – AI tools themselves should incorporate protections that restrict the use of third-party voices, images, or faces without verified consent. Such use should be available only to professional users holding licences, while ordinary users should be restricted from it, and there should be default blocks on the biometrics of celebrities and public figures.
- Platform-Level Prohibitions – Require proactive scanning and filtering of prohibited categories of synthetically generated information, such as pornographic content, defamatory face-swaps, and fraudulent audio, before publication.
- Victim Compensation – Victims of such harm often develop psychological issues such as depression, anxiety, and isolation; the law should therefore introduce automatic statutory compensation or rehabilitation funds through amendments or dedicated cyber-victim schemes, as the United States has done through the TAKE IT DOWN Act (2025).
- Public Awareness – Awareness is one of the most powerful tools against deepfakes. The public should be taught how to protect their identity and how to report misuse of their personal information to cyber authorities. Protective steps include keeping accounts private, limiting the sharing of images, audio, or video with strangers, and remaining cautious of suspicious activity.
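The platform-level scanning recommendation above can be made concrete. One common building block for re-detecting known abusive images even after small edits is perceptual hashing. The sketch below is a minimal, hypothetical illustration using a simple average hash; production systems (such as PDQ or PhotoDNA) are far more robust, and the 8x8 grids here are assumed to come from an earlier image-resizing step:

```python
# Minimal average-hash (aHash) sketch for near-duplicate image matching.
# Assumes images are already reduced to 8x8 grayscale grids (values 0-255).
# Illustrative only; not any platform's actual moderation system.

def average_hash(grid):
    """Return a 64-bit perceptual hash of an 8x8 grayscale grid."""
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(h1 ^ h2).count("1")

# Usage: an original flagged image vs. a lightly edited re-upload.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
reupload = [[min(255, v + 6) for v in row] for row in original]  # brightness shift
unrelated = [[255 - v for v in row] for row in original]         # inverted image

assert hamming(average_hash(original), average_hash(reupload)) <= 8   # near-duplicate
assert hamming(average_hash(original), average_hash(unrelated)) > 20  # different image
```

A platform could keep hashes of content already removed and compare each new upload's hash against that list, catching re-uploads before they circulate; this is the kind of proactive filter, rather than complaint-driven takedown, that the recommendation envisages.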
Conclusion
The discussion in this article demonstrates that deepfakes are no longer a future technological risk but a present social and legal crisis. While artificial intelligence has contributed enormously to innovation and digital growth, its misuse through non-consensual synthetic media has exposed serious gaps in detection systems, platform governance, and legal protection. The key finding is that the harm caused by deepfakes is not merely technological but deeply human—affecting privacy, dignity, mental health, reputation, and social standing. Existing Indian provisions under cyber, criminal, and intermediary laws offer partial remedies, yet they remain reactive and fragmented. Even with the 2026 regulatory developments, enforcement challenges, lack of compensation mechanisms, and the rapid spread of content continue to leave victims vulnerable to repeated trauma.
Comparatively, jurisdictions such as the European Union, the United Kingdom, and the United States are gradually moving toward clearer deepfake-specific offences and stronger victim-centred approaches, highlighting the direction in which reform must progress. For India, the way forward requires a combination of legal, technological, and institutional responses.
Meaningful reform should include explicit statutory recognition of deepfake offences, mandatory safeguards embedded in AI systems, stronger intermediary accountability, fast-track removal mechanisms, and dedicated compensation or rehabilitation schemes for victims. Equally important is public awareness, digital literacy, and coordinated action between the State, platforms, and civil society.
Ultimately, regulating deepfakes is not simply about controlling technology; it is about preserving trust, protecting identity, and ensuring that digital progress does not come at the cost of human dignity and rights.
References:
[1] Reality Defender, A Brief History of Deepfakes, Reality Defender, https://www.realitydefender.com/insights/history-of-deepfakes (last visited Feb. 20, 2026).
[2] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, G.S.R. 120(E), Gazette of India, Extraordinary, Part II, § 3, sub-sec. (i) (Feb. 10, 2026).
[3] India Const. art. 21
[4] Information Technology Act, 2000, §§ 66C, 66D, 66E, 67, 67A, No. 21 of 2000, India Code (2000), https://www.indiacode.nic.in/
[5] “Maharashtra cyber police probe pornographic deepfake videos of influencer Payal Gaming,” Hindustan Times, Aug. 2025, https://www.hindustantimes.com/india-news/maharashtra-cyber-police-probe-pornographic-deepfake-videos-of-influencer-payal-gaming-1017250xxxx.html
[6] Bharatiya Nyaya Sanhita, No. 45 of 2023 (India)





