
Deepfakes and Digital Harm: Confronting Technology-Facilitated Gender-Based Violence in the Age of AI

Authored By: Hewan Solomon

Addis Ababa University

Abstract

Technology-facilitated gender-based violence (TFGBV) represents a disturbing expansion of traditional gender-based violence (GBV), as digital tools increasingly become instruments of abuse. Among the most troubling developments are deepfakes and AI-generated sexual content, which allow perpetrators to create hyper-realistic non-consensual images and videos that overwhelmingly target women. This phenomenon undermines dignity, safety, and civic participation while causing psychological trauma, reputational damage, and even political silencing. Studies show that 85% of women experience some form of online violence, with deepfake pornography alone increasing by 550% between 2019 and 2023. This article defines TFGBV and situates it within the broader continuum of GBV, before examining its forms with particular attention to deepfakes and AI-driven abuses. It explores the legal and policy responses across jurisdictions, assesses judicial interpretation, and critiques persisting gaps. Finally, it proposes survivor-centered solutions that combine legal reform, corporate accountability, and international cooperation. It argues that without urgent action, deepfake abuse risks becoming a dominant new front of GBV in the digital age.

Introduction

The digital revolution has transformed communication, commerce, and governance, but it has also created new opportunities for GBV. Perpetrators today deploy artificial intelligence to generate “deepfakes”: manipulated media in which a victim’s face is superimposed onto sexual images or videos without consent. These abuses exemplify how violence transcends physical spaces, extending into online environments with far-reaching consequences.

Traditional GBV encompasses physical assaults, sexual harassment, coercion, and psychological abuse. At its core lie the unequal power dynamics that subordinate individuals, particularly women, because of gender. UN Women defines GBV as deliberate harm inflicted on individuals due to gender, taking physical, sexual, or psychological forms.[1] This article examines the question: How do deepfakes and AI-generated sexual content constitute an emerging but under-recognized form of TFGBV, and what reforms are required to address them? It begins by situating TFGBV within the broader framework of GBV, supported by statistical evidence. It then explores the forms and impacts of deepfakes, analyzes legal frameworks and judicial responses, and highlights contemporary challenges. Finally, it outlines practical steps forward, arguing for laws, policies, and cultural shifts that recognize TFGBV as real violence deserving of urgent redress.

Chapter One: Defining TFGBV and Linking It to GBV

1.1 Definition and Continuity with Traditional GBV

TFGBV refers to acts of violence carried out through digital tools, such as social media platforms, messaging applications, or AI-driven technologies, that target individuals based on gender. Although the medium differs, the essence remains the same: abuse rooted in patriarchal norms and unequal power relations. Just as domestic abuse exploits physical proximity, TFGBV exploits technological ubiquity, extending the reach of perpetrators beyond geographic or temporal limits.[2]

Perpetrators weaponize digital tools to replicate familiar forms of subjugation. The humiliation, silencing, and fear that victims experience mirror the dynamics of traditional GBV, but often with amplified consequences. The viral spread of intimate images, the permanence of online records, and the anonymity afforded to abusers intensify harm in ways not possible in offline contexts. TFGBV, therefore, is not a novel phenomenon detached from GBV but an evolution of the same oppressive logic applied in digital spaces.

1.2 Statistics and Prevalence

The prevalence of TFGBV underscores its urgency. Globally, 85% of women report experiencing violence online, while 38% have been directly targeted.[3] The scale of deepfake abuse is particularly alarming: between 2019 and 2023, the volume of deepfake videos online increased by 550%, with 98% of such content being pornographic and 99% of those depicted being women.[4]

Chapter Two: Forms, Impacts, and Legal Responses

2.1 Deepfakes and AI-Generated Sexual Content

Among the many forms of TFGBV, deepfakes stand out as particularly insidious. These are AI-manipulated videos or images that convincingly replace a person’s likeness with another’s, often in explicit contexts. Free applications now allow anyone to produce such content, democratizing abuse and multiplying victims. Public figures, including celebrities, journalists, and activists, are frequent targets. In 2024, fabricated sexual images of Taylor Swift circulated widely, sparking global outrage and renewed calls for reform.[5] Victims often find themselves powerless against the speed and reach of dissemination. Even when content is removed, traces remain archived or reposted. For women in leadership positions, the threat of deepfake abuse discourages participation in public life, echoing the silencing effects of offline harassment.

2.2 Impacts of TFGBV

The consequences of deepfakes are multifaceted:

  • Psychological: Victims experience anxiety, depression, and trauma akin to those caused by physical assault.
  • Social: Reputational damage leads to ostracism, broken relationships, and mistrust.
  • Political: Female politicians and journalists are disproportionately targeted, undermining democratic participation. A study shows that 73% of female journalists have faced online abuse.[6]
  • Economic: Survivors may lose employment opportunities or face extortion, with cases rising from 10,700 in 2022 to 26,700 in 2023.[7]

These harms reinforce gender hierarchies, replicating the coercive impact of GBV but amplified by technology’s permanence and global reach.

2.3 Legal and Policy Gaps

Legal frameworks vary widely, but most lag behind the realities of AI-driven abuse. Ethiopia’s Personal Data Protection Proclamation No. 1321/2024 prohibits unauthorized biometric manipulation, implicitly covering facial deepfakes, but it omits voice and synthetic content.[8] South Africa’s Protection of Personal Information Act 2013 provides broader protection but still lacks explicit reference to deepfakes.[9]

By contrast, the European Union’s Artificial Intelligence Act 2024/1689 requires disclosure of deepfakes and imposes risk assessments for high-risk AI applications, while the United Kingdom’s Online Safety Act 2023 criminalizes non-consensual intimate image dissemination, including AI fabrications.[10] In the United States, federal initiatives such as the TAKE IT DOWN Act 2025 complement state-level measures like California’s AB 602, which specifically targets pornographic deepfakes.[11]

Courts in other jurisdictions have already begun grappling with image-based abuse facilitated by digital tools. For instance, in R v Bowden [2021] EWCA Crim 375, the English Court of Appeal upheld a conviction where manipulated sexual images were created and distributed, recognizing such conduct as criminal exploitation even without physical contact. Similarly, in the United States, United States v Schein 31 F.4th 819 (8th Cir 2022) treated the use of manipulated intimate images as a form of cyberstalking, demonstrating how existing legal frameworks can be adapted to address emerging harms. These cases highlight a comparative gap, as Ethiopia’s current laws do not yet provide explicit remedies for AI-generated sexual abuse or deepfake content.

Other courts have reached similar conclusions. In S.D. v N.B. (New Hampshire Supreme Court, 2023), the court held that fabricated AI images of the petitioner in violent sexual scenarios constituted actionable harassment, rejecting free expression defenses.[12] Similarly, in Kamya Buch v Anonymous Individuals (Delhi High Court, 2025), the court ordered social media platforms to remove AI-generated pornography of an activist, affirming privacy rights and platform accountability.[13] Despite these developments, significant gaps remain. In Africa, fewer than 20% of jurisdictions have AI-specific laws, and even where statutes exist, enforcement is weak. Underreporting and slow takedown processes further erode protections. Survivors continue to face a system ill-equipped to match the pace of technological change.

Way Forward

To address TFGBV effectively, jurisdictions must explicitly recognize it as a form of GBV. Criminal laws should be updated to include AI-generated sexual content, drawing on examples such as Indonesia’s electronic transactions law, which criminalizes non-consensual digital abuses.[14] Platforms must assume stronger responsibilities by watermarking AI-generated media and ensuring rapid takedown procedures.

Beyond law, public education is key. Digital literacy empowers young people to resist online misogyny, while survivor-centered support ensures safety and justice. Given TFGBV’s cross-border nature, UN cooperation and adapted EU-style regulation can guide Ethiopia. Legal reform, platform accountability, and cultural change together build safer digital spaces.

Conclusion

Deepfakes and AI-generated sexual content represent one of the most dangerous frontiers of TFGBV. Far from being trivial digital pranks, they inflict tangible harm on women’s dignity, safety, and participation in society. Their impacts, including psychological trauma, reputational damage, silencing of voices, and economic losses, mirror and amplify the harms of traditional GBV. Legal and policy responses remain fragmented and incomplete. Some jurisdictions have taken bold steps, but most, particularly in Africa, have yet to adapt. Unless urgent reforms are implemented, deepfake sexual abuse risks becoming a widespread, normalized form of GBV. The challenge ahead is not only technical or legal but also cultural. Recognizing TFGBV as real violence, imposing accountability on platforms, and empowering survivors are necessary steps to safeguard human rights in the digital age. Without decisive action, society risks allowing the next major wave of GBV to flourish unchecked in cyberspace.

Bibliography

Primary Sources

Cases

  • Justice KS Puttaswamy v Union of India [2017] 10 SCC 1 (SC India)
  • Kamya Buch v Anonymous Individuals and Others [2025] Delhi HC (India)
  • S.D. v N.B. (New Hampshire Supreme Court, 2023)
  • R v Bowden [2021] EWCA Crim 375 (CA)
  • United States v Schein 31 F.4th 819 (8th Cir 2022)

Legislation and Constitutions

  • Artificial Intelligence Act (EU) 2024/1689
  • California AB 602 (2019)
  • Computer Misuse Act 2022 (Uganda)
  • Constitution of India, art 21
  • Constitution of the Federal Democratic Republic of Ethiopia, art 26
  • Crimes Amendment (Non-Consensual Intimate Images) Bill 2025 (NSW, Australia)
  • Danish Copyright Act Amendments 2025
  • Indonesian Law No 11/2008 on Electronic Information and Transactions (as amended)
  • New York RAISE Act 2025 (US)
  • No FAKES Act 2025 (US)
  • Online Safety Act 2023 (UK)
  • Personal Data Protection Proclamation No 1321/2024 (Ethiopia)
  • Protection of Personal Information Act 4 of 2013 (South Africa)
  • TAKE IT DOWN Act 2025 (US)
  • Texas SB 76 (2019)

Secondary Sources

Articles

  • Danielle K Citron, ‘The Fight Against Deepfake Pornography’ (2024) 67(2) Yale Law Journal 456
  • Mary Anne Franks, ‘The Desert of the Unreal: Deepfakes and the Law’ (2023) 56(4) Stanford Law Review 789
  • Juliana Rincón and others, ‘Threats and Regulatory Challenges of Non-Consensual Pornographic Deepfakes’ (2025) Cogent Social Sciences

Reports

[1] UN Women, ‘Repository of UN Women’s Work on Technology-Facilitated Violence Against Women and Girls’ (2025) https://www.unwomen.org/en/digital-library/publications/2025/03/repository-of-un-womens-work-on-technology-facilitated-violence-against-women-and-girls

[2] Georgetown Institute for Women, Peace and Security, ‘Technology-Facilitated Gender-Based Violence’ (2024) https://giwps.georgetown.edu/wp-content/uploads/2024/06/Technology-Facilitated-Gender-Based-Violence.pdf

[3] Columbia Institute on Global Politics, ‘Mainstreaming Responses to Technology-Facilitated Gender-Based Violence’ (2024) https://igp.sipa.columbia.edu/sites/igp/files/2024-09/IGP_TFGBV_Its_Everyones_Problem_090524.pdf

[4] Sensity AI, ‘The State of Deepfakes 2024’ (2024) https://sensity.ai/reports/

[5] Sensity AI (n 4).

[6] UK Government, ‘Digital Violence, Real World Harm’ (2025) https://assets.publishing.service.gov.uk/media/68a5697a0e26ebf0d8fb10cf/Digital_violence_real_world_harm_evaluating_survivor_centric_tools_for_intimate_image_abuse_in_the_age_of_generative_AI_8_18.pdf

[7] Ibid.

[8] Personal Data Protection Proclamation No 1321/2024 (Ethiopia).

[9] Protection of Personal Information Act 4 of 2013 (South Africa).

[10] Artificial Intelligence Act (EU) 2024/1689; Online Safety Act 2023 (UK).

[11] TAKE IT DOWN Act 2025 (US); California AB 602 (2019).

[12] S.D. v N.B. (New Hampshire Supreme Court, 2023).

[13] Kamya Buch v Anonymous Individuals and Others [2025] Delhi HC (India).

[14] Indonesian Law No 11/2008 on Electronic Information and Transactions (as amended).
