Authored By: Mansi Yadav
Maharshi Dayanand University (MDU-CPAS)
Abstract
Deepfakes, powered by AI like GANs and diffusion models, pose risks from harassment to electoral manipulation. Existing laws (e.g., India’s IT Act ss 66E, 66C-D; US TAKE IT DOWN Act; EU AI Act) offer reactive remedies via privacy, defamation, and platform liability, but gaps in definitions, enforcement, and proactivity persist. This article evaluates these frameworks, global initiatives, and proposes risk-based regulation, labelling, and international cooperation for balanced accountability.
Introduction
Today, with the advent of generative artificial intelligence tools that can produce hyper-realistic digital audio, video, photos, and text, deepfakes have rapidly shifted from a novelty for tech-savvy enthusiasts to a mainstream issue that society as a whole must confront. Deepfakes can manipulate a person’s voice, image, or actions in a manner that seems completely believable, giving them a wide range of potential applications, from creative expression to malicious activity. Ironically, the very features that make deepfakes so fascinating also enable serious harm, confronting every nation with a pressing question: can current law accommodate this new technology, or do we require new legislation?
This article discusses how current laws have been applied to deepfakes, the shortcomings of those laws, and how regulatory frameworks worldwide are shifting to combat the technology’s negative impacts.
Understanding Deepfakes and the Threats They Create
Before examining legal frameworks, it is crucial to understand what deepfakes are. Deepfakes are synthetic media created by AI techniques such as generative adversarial networks (GANs), diffusion models, or large language models, which can manipulate or generate realistic imagery, audio, or video of individuals saying or doing things that never occurred.
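The adversarial idea behind GANs can be sketched in a few lines. The following toy example is illustrative only (a one-parameter “generator” and a logistic “discriminator” on 1-D data, nothing like a production model): the generator learns to shift random noise toward the real data distribution precisely because the discriminator keeps learning to tell the two apart.

```python
import numpy as np

# Toy adversarial training loop (illustrative sketch, not a real GAN):
# the "generator" has one parameter, a shift applied to noise; the
# "discriminator" is a 1-D logistic classifier. Both take hand-derived
# gradient steps on the standard adversarial objectives.

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 4.0        # the "real data" distribution is N(4, 1)
g_shift = 0.0          # generator parameter: shifts noise toward the data
d_w, d_b = 0.0, 0.0    # discriminator parameters (logistic regression)
lr = 0.05

for _ in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, 64)
    fake = rng.normal(0.0, 1.0, 64) + g_shift

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    d_b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: move fakes in the direction the discriminator
    # currently associates with "real".
    p_fake = sigmoid(d_w * fake + d_b)
    g_shift += lr * np.mean(1 - p_fake) * d_w

print(g_shift)  # drifts from 0.0 toward the real mean
```

The point of the sketch is the feedback loop: as the discriminator improves, it supplies the gradient that improves the generator, which is why detection and generation quality tend to escalate together.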
The negative impacts of deepfakes can be far-reaching:
- Personal harm: Non-consensual intimate deepfakes can be used for harassment, blackmail, or attacks on reputation.
- Political manipulation: Deepfakes can deceive voters, distort political choices, and provoke social unrest.
- Fraud & security risks: Voice and video deepfakes can impersonate individuals to commit financial fraud or to bypass security systems.
- Erosion of trust in digital evidence: The spread of deepfakes undermines confidence in digital evidence, since real and fabricated material can no longer be distinguished without forensic tools.
These risks strain current legislation, which was drafted for analogous harms that predate the use of artificial intelligence to produce ubiquitous synthetic media.
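The last risk above, eroded trust in digital evidence, is at bottom an authentication problem, and it is where technical and legal responses meet. As a hedged sketch (a shared-secret HMAC stands in for the public-key signatures real provenance systems use, and all names are illustrative), a capture device could sign each file so that any later alteration is detectable:

```python
import hashlib
import hmac

# Illustrative only: real provenance schemes (e.g. signed camera metadata)
# use public-key signatures; a shared secret is used here for brevity.
DEVICE_KEY = b"device-secret-key"

def sign(media: bytes) -> str:
    """Tag produced by the capture device at recording time."""
    return hmac.new(DEVICE_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    """Later check: does the file still match the tag it was signed with?"""
    return hmac.compare_digest(sign(media), tag)

original = b"exhibit-A-video-bytes"   # stand-in for a recorded file
tag = sign(original)

print(verify(original, tag))          # True: file unaltered since capture
print(verify(original + b"!", tag))   # False: any edit breaks verification
```

A court or platform relying on such tags shifts the question from “does this look real?” to “was this signed at capture?”, which is far easier to answer at scale.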
How Existing Legislation Has Been Used to Combat Deepfakes
Civil and Criminal Law Applied by Extension
One of the first issues that arose for legal scholars was whether existing legislation could be used to deal with the problem of deepfakes. In most cases, deepfakes are dealt with not by technology-specific legislation, but by general legal frameworks such as:
- Privacy and consent: Laws criminalising the non-consensual distribution of intimate images have been extended to deepfakes. In India, for example, Section 66E of the Information Technology Act, 2000 penalises the intentional capture, publication, or transmission of private images without consent, and is increasingly applied to deepfake likenesses.
- Identity theft and impersonation: Cybercrime provisions penalising impersonation, cheating, and related offences, such as Sections 66C and 66D of India’s IT Act, are increasingly applied to deepfakes that deceive through identity.
- Obscenity and defamation: Criminal laws on obscene material and on defamation have been invoked where deepfakes cause reputational harm or disseminate plainly inappropriate content.
- Platform liability regimes: Safe-harbour provisions such as Section 79 of India’s IT Act can compel platforms to remove illegal content in order to retain their legal shield.
While such frameworks occasionally capture deepfake misuse, they face many challenges: they are largely reactive, punishing misuse after it occurs rather than preventing it, and they were not designed with the specific characteristics of deepfakes in mind.
Critical Gaps in the Present Laws
- Lack of Technology-Specific Definitions
An important challenge is that most traditional laws do not define “deepfake” or “synthetic media”, creating ambiguity about when a given law applies.
For instance, the EU’s evolving regulatory framework on AI, the EU AI Act, attempts to outline what synthetic or manipulated content means and when transparency requirements apply, though its definitions remain debated. Critics argue that overly broad or vague definitions risk under-regulation or over-regulation.
- Fragmented Legal Coverage
In most jurisdictions, the regulation of deepfakes is fragmented or absent. In the USA, for instance, there was until recently no federal legislation governing deepfakes. This gap has been partly addressed by specific laws, including the “TAKE IT DOWN Act”, which requires platforms to remove non-consensual intimate deepfakes within 48 hours of receiving notice. Individual US states have also enacted their own legislation, such as Tennessee’s ELVIS Act covering voice and audio deepfakes.
- Enforcement Challenges
Even where the law regulates certain consequences of deepfakes, enforcement remains challenging for several reasons:
- Anonymity & Cross-Border Creation: Creators of deepfakes may act anonymously or from jurisdictions with weak enforcement, making them difficult to trace.
- Volume & Speed: Content is generated on massive platforms at a scale that outpaces the traditional notice-and-takedown procedures of the internet.
- Detection Complexity: Distinguishing a deepfake from authentic content may require sophisticated forensic analysis, delaying legal action.
International and National Regulatory Initiatives
To address these challenges, many countries are developing AI-specific regulatory models that blend traditional enforcement with technology-specific duties.
European Union: AI Act & Digital Services Act
The European Union leads the way with regulatory experiments.
- The AI Act classifies most deepfakes as ‘limited risk’, mandating transparency measures such as labelling and watermarking of AI-generated content, while practices posing unacceptable risk are prohibited outright, with prohibitions applying from February 2025.
- The Digital Services Act (DSA) further strengthens platform obligations to quickly remove and report unlawful or harmful content.
Further, individual member states are moving too: Spain has approved a draft law requiring consent for the use of a person’s image and voice in AI-generated content, part of the broader EU effort to limit non-consensual deepfakes.
United Kingdom: Strengthening Criminal Offences
The UK has recently legislated to criminalise the creation and sharing of non-consensual intimate images made with deepfake technology, treating such conduct as a priority under its Online Safety Act.
United States of America: Federal and State Action
In the US, the federal TAKE IT DOWN Act directly addresses non-consensual intimate deepfakes. The proposed No Fakes Act, not enacted as of 2025, would establish a national right of publicity over voices and likenesses. Meanwhile, X Corp has challenged Minnesota’s 2024 deepfakes law on First Amendment grounds, highlighting the tension between deepfake regulation and free speech.
India: IT Rules Amendments and Judicial Responses
The 2023 IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules brought ‘synthetic information’ within Rule 3(1)(b)(viii), requiring intermediaries to exercise diligence and label such content. The amendment seeks to reduce ambiguity and to address intermediaries’ failure to disclose deepfakes. Indian courts are also addressing deepfake harms: in a significant case concerning unauthorised AI-generated content of a prominent public figure, the Delhi High Court granted a dynamic injunction requiring swift, technology-assisted takedowns, reasoning that traditional remedies would not suffice.
Other Global Moves
Other countries are testing novel approaches. Denmark has proposed granting individuals copyright-like rights over their facial features and voice to deter deepfake misuse, while in countries such as South Korea even possessing deepfake sexual material is an offence.
Can Existing Laws Cope?
Despite these regulatory efforts, a core question remains: are current laws enough to handle deepfake technology?
Strengths of Current Legal Regimes
Existing legal frameworks provide important tools:
- Immediate recourse: Victims of deepfake harms can seek remedies under national statutes on privacy, defamation, and cybercrime.
- Platform obligations: Laws such as the EU’s DSA and India’s IT Rules obligate platforms to act against illegal content.
- Judicial adaptation: The courts have been willing to interpret existing laws in light of new harms.
Together, they set the base for accountability and redress.
Limitations and Gaps
However, the technology’s newness and scale stress traditional laws in important ways:
- Reactive rather than proactive regulation: Most laws respond only after harm occurs, whereas deepfakes call for proactive standards such as mandatory labelling and verification.
- Ambiguity of definition and scope: Laws rarely define “deepfake” or “synthetic media”, making it harder still to distinguish harmful from permissible uses.
- Global coordination: Deepfakes cross borders instantly, yet enforcement remains primarily local.
- Balance between innovation and restriction: Restrictions designed to prevent abuse raise concerns about freedom of expression and innovation, as seen in debates over measures such as the No Fakes Act and state-level deepfake bans in the U.S.
Towards a Future-Ready Framework for Regulation
To adequately address deepfake technology, legal frameworks may have to develop beyond the conventional law-making process in the following ways:
- Clear Definitions & Risk-Based Regulation
Regulation should start with technology-neutral definitions that differentiate harmless synthetic media from dangerous deepfakes. Risk-based approaches such as the EU AI Act appear promising because obligations scale with the level of risk posed.
- Transparency and Labeling Requirements
Making labels on synthetic content visible and persistent, as mandated by the amendments to India’s IT Rules, empowers users and deters deceptive practices.
- Platform Responsibility with Safeguards
Platforms should be obligated to address deepfakes, but regulation must ensure that moderation requirements protect free speech and do not result in over-censorship. Carefully crafted safe-harbour provisions could balance liability protection with incentives for enforcement.
- International Cooperation
Because digital content has no borders, international cooperation on shared norms and standards is essential. Bodies such as the Council of Europe and the G20 can play an effective role here.
- Innovation-Friendly Enforcement
Regulation should also foster technological solutions for deepfake detection and authentication. Incentives for research and adoption, including public-private partnerships, can strengthen the capacity for legal enforcement.
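Several of the proposals above, persistent labelling and authentication tooling in particular, can be illustrated with a minimal provenance-manifest sketch. The structure below is hypothetical (loosely inspired by C2PA-style content credentials; the field names and functions are invented for illustration, not drawn from any statute or standard):

```python
import hashlib
import json

def label_synthetic(media: bytes, tool: str) -> str:
    """Build a JSON manifest that discloses the content is synthetic and
    binds that disclosure to this exact file via a content hash."""
    return json.dumps({
        "synthetic": True,                            # the disclosure itself
        "generator": tool,                            # which AI tool produced it
        "sha256": hashlib.sha256(media).hexdigest(),  # ties the label to the bytes
    })

def label_matches(media: bytes, manifest_json: str) -> bool:
    """Check that a manifest still refers to this file (no silent edits)."""
    manifest = json.loads(manifest_json)
    return (manifest.get("synthetic") is True
            and manifest.get("sha256") == hashlib.sha256(media).hexdigest())

clip = b"generated-video-bytes"                       # stand-in for a media file
manifest = label_synthetic(clip, "example-model-v1")  # hypothetical tool name

print(label_matches(clip, manifest))         # True: label travels with the file
print(label_matches(clip + b"x", manifest))  # False: an edited copy no longer matches
```

The legal value of such a scheme is that the label is persistent and tamper-evident: a platform or regulator can verify the disclosure mechanically rather than relying on the uploader’s honesty.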
Conclusion
Deepfake technology sits at one of the most challenging frontiers where artificial intelligence, law, and society intersect in the digital era. Existing legislation offers avenues for remedies and penalties, yet these laws were not developed for the specific challenges of AI-generated synthetic media, which is driving the search for new regulatory approaches.
Whether current laws can cope has no simple answer: some harms can be addressed effectively today, but as deepfakes become more widespread, clearer definitions in the laws that regulate them will become necessary. Balancing innovation, accountability, and freedom of expression with protection against harm will be central to these developments.