
CROSS-BORDER LEGAL ANALYSIS: HUMAN RIGHTS, MORAL FOUNDATIONS, AND AI REGULATIONS

Authored By: Yoanna Koleva

ABSTRACT

Artificial intelligence (AI) is reshaping global governance, human rights protections, and the moral foundations of modern society. As deepfakes, algorithmic discrimination, cross‑border cybercrime, and child exploitation proliferate, states struggle to regulate technologies that transcend national boundaries. This article provides a comparative analysis of emerging AI regulations in the United States, European Union, United Kingdom, China, and South Korea, highlighting global trends such as platform accountability, mandatory labeling, and protections against non‑consensual intimate imagery (NCII). It further examines the moral tensions between utilitarian AI decision‑making and human ethical frameworks, emphasizing cross‑cultural variations in moral expectations. The article concludes with a critical evaluation of current regulatory strengths, weaknesses, and gaps, arguing for a human‑centric, culturally aware global governance model to safeguard digital humanity.

INTRODUCTION

Artificial intelligence has become the defining force of the 21st century, influencing decisions that affect personal autonomy, public safety, and global order. As AI systems increasingly mediate human interactions—through surveillance, automated decision‑making, and content generation—the world faces a profound legal and ethical dilemma: Who controls the moral compass of AI?

The rise of deepfakes, cross‑border cybercrime, child exploitation, and algorithmic bias exposes the inadequacy of existing legal frameworks. Legislators must now craft regulations that protect human dignity while enabling innovation.

MORAL DILEMMAS OF AI: FROM THE TROLLEY PROBLEM TO DIGITAL HARM

The classic “Trolley Problem” has become a real‑world challenge in the age of autonomous systems. AI now makes decisions where human lives and rights are at stake, including autonomous vehicles, predictive policing, and border surveillance.

Research on the foreign language effect demonstrates that individuals make more utilitarian decisions when reasoning in a non‑native language.¹ AI, which lacks emotional intuition, amplifies this effect: it tends to choose the mathematically optimal outcome even when humans would prioritize empathy or fairness.

Deepfake technology, child exploitation imagery, and algorithmic manipulation create moral dilemmas that no machine can fully comprehend. Legislators must embed human‑oriented values into AI systems before delegating life‑altering decisions to them.

GLOBAL LEGAL RESPONSES TO AI‑DRIVEN HARM

AI regulation is being created and enforced every day but remains insufficient. Below is a comparative overview of major jurisdictions.

United States

  • TAKE IT DOWN Act (2025) The first federal law requiring platforms to remove non‑consensual intimate deepfakes within 48 hours.²
  • State Legislation As of 2026, 46 states have enacted laws targeting sexual deepfakes, political manipulation, and unauthorized AI‑generated likenesses.³
  • Pending Federal Bills
      • DEFIANCE Act (2025): Enables civil suits up to $250,000.⁴
      • NO FAKES Act (2025): Protects against unauthorized voice and likeness cloning.⁵

European Union

  • EU AI Act (2024/2025) The world’s first AI regulation, implementing mandatory labeling of AI‑generated content (Art. 50) and fines up to 6% of global turnover.⁶
  • Digital Services Act (DSA) Requires platforms to mitigate risks from manipulated media and remove illegal content.⁷

Denmark

(2025 Amendment) Grants citizens the right to demand removal of unauthorized AI imitations, extending protection 50 years post‑mortem.⁸

United Kingdom

Online Safety Act (2023/2025) Criminalizes creation or request of intimate deepfake images; penalties up to two years imprisonment.⁹

China

Synthetic Content Measures (2025) Mandates visible and invisible watermarking and prohibits alteration of AI watermarks.¹⁰

South Korea

Deepfake Criminalization (2023–2024) Up to seven years imprisonment for creators; criminal liability for possession or viewing of NCII deepfakes.¹¹

GLOBAL REGULATORY TRENDS

Platform Accountability

Governments hold platforms liable for failing to remove harmful AI content within strict timeframes.

Mandatory Labeling

AI metadata and watermarking are becoming global standards.

Focus on NCII

With 96% of deepfake videos being non‑consensual and sexualized, regulators prioritize victim protection.¹²

MORAL AND ETHICAL FOUNDATIONS OF AI

AI ethics frameworks emphasize trustworthiness, transparency, fairness, and human‑centric design. Yet global moral expectations differ significantly.

Cross‑Cultural Variations in Moral Reasoning

Western (WEIRD) cultures: Prioritize individual rights and harm‑based ethics.¹³

Non‑Western cultures: Focus on community, authority, and collective well‑being.

AI trained on Western‑dominated datasets risks imposing a form of digital neo‑colonialism.¹⁴

AI vs. Human Moral Decision‑Making

AI = Utilitarian: optimizes outcomes mathematically.

Humans = Deontological: follow moral rules (“do not kill,” “respect dignity”).

AI lacks moral agency and cannot experience empathy.¹⁵

The Accountability Gap

When AI causes harm, responsibility becomes unclear:

The developer, the platform, the user, or the AI system itself?

This “responsibility trap” is one of the most urgent legal challenges of the decade.¹⁶

INTERNATIONAL LAW AND GOVERNANCE

Council of Europe Framework Convention on AI (2024)

The first binding treaty ensuring AI respects human rights, democracy, and the rule of law.¹⁷

UN Global Digital Compact (2024)

Adopted by 193 nations, promoting safe digital spaces and aligning AI with international legal standards.¹⁸

CRITICAL ANALYSIS: STRENGTHS, WEAKNESSES, AND GAPS

Humanity has always sought to automate labor—from mechanization to electrification to the digital revolution. Each technological leap brought disruption, but AI represents a fundamentally different transformation. It challenges not only economic structures but the very essence of human agency.

Strengths:

  • Growing global consensus on transparency
  • Stronger protections against deepfake exploitation
  • Increasing recognition of cross‑border digital harm

Weaknesses:

  • Fragmented national laws
  • Lack of universal moral standards
  • Weak enforcement mechanisms
  • No unified liability framework
  • Overreliance on platform self‑regulation
  • Insufficient cultural diversity in AI training data

Gaps:

  • No global definition of AI‑related harm
  • Limited protection for vulnerable groups

We stand at a turning point. Mistakes made now will echo for generations. The cost of inaction—or misguided action—will not be paid by us alone, but by our children.

A CALL FOR A COMMON GLOBAL FRAMEWORK:

Even with fast‑moving regulation, AI governance remains radically disjointed, producing significant enforcement challenges and jurisdictional confusion. AI systems span borders, but legal responsibility remains confined within national boundaries. This divergence between technological reality and legal geography produces what academics call a “regulatory patchwork,” in which protection differs dramatically depending on where a user is located.¹⁹ A deepfake created in a jurisdiction without strong AI laws can spread around the world in minutes, reaching victims in countries whose robust protections cannot be enforced extraterritorially. The lack of harmonised standards also invites “AI haven shopping,” in which developers operate from countries with little oversight to circumvent the EU AI Act and other tougher regimes.²⁰

Moreover, the absence of interoperability among regulators weakens even the most modern national laws. The EU AI Act imposes robust transparency and risk‑based requirements, but those requirements lose force when AI models trained or deployed outside the EU are imported into its digital market. The United States’ sector‑specific approach, which relies heavily on state‑level legislation, also produces internal inconsistencies that hamper cross‑border cooperation.²¹ China’s commitment to state regulation and compulsory watermarking reflects a wholly different regulatory philosophy, raising questions about how competing political values shape the global infrastructure of AI.²² An international standard would not require identical laws, but minimum global norms, similar to the Paris Agreement, protecting basic rights such as privacy, dignity, and other fundamental human rights.
These might include shared definitions of AI‑related harm, coordinated enforcement mechanisms, and common legal assistance treaties for digital crimes.²³ Without them, victims of deepfakes, algorithmic discrimination, or AI‑driven trafficking will continue to face overwhelming obstacles to justice. The Council of Europe’s Framework Convention on AI and the UN Global Digital Compact are promising first steps, but they will only be as effective as their ratification and implementation are widespread.²⁴ Ultimately, the future of AI governance rests on the world’s ability to move beyond fragmented national approaches and embrace a cooperative, human‑centred global strategy that addresses the shared vulnerabilities of digital humanity.

One of the most urgent challenges of AI governance is cross‑border enforcement. Unlike traditional conduct, AI systems operate in an open digital space where data, algorithms, and potentially harmful content can be built in one jurisdiction, hosted in another, and consumed around the world. This creates a dramatic gap between the territorial character of law and the transnational character of AI‑enabled harm.²⁵ Traditional concepts of jurisdiction (territoriality, nationality, and the protective principle) struggle with the speed and scale at which AI‑produced content circulates. A deepfake produced in a nation with a weak regulatory net can spread worldwide in seconds, and few victims have any realistic chance of suing the perpetrators or platforms across foreign borders.²⁶

Inconsistent national standards compound the problem. The EU AI Act imposes heavy transparency and risk‑management requirements; the United States takes a piecemeal, sector‑driven approach; China follows a state‑led model with mandatory watermarking; and South Korea criminalises the mere possession of non‑consensual deepfake material.²⁷ These competing attitudes create legal uncertainty for multinationals and complicate joint enforcement. A platform operating across jurisdictions faces no common set of obligations, whether takedown deadlines, labeling rules, or standards of liability.²⁸

An emerging concern is criminal liability for AI. Given the autonomous capabilities of AI systems, scholars question whether traditional criminal law doctrines such as mens rea, causation, and foreseeability can address AI‑based harms.²⁹ Some argue that AI harms should be treated as the product of a tool or instrument, with responsibility falling on the developer of the technology, the provider of the application, or the user.
Others propose legal categories such as “electronic personhood” or “algorithmic agents,” but these concepts remain controversial and risk diluting human responsibility by obscuring who will bear the costs and liabilities.³⁰ The key question is who should be held responsible when an AI system acts in ways its creator neither predicted nor clearly intended.

Cross‑border criminal investigations pose further challenges. Mutual legal assistance treaties (MLATs) are notoriously slow, often taking many months to answer requests for digital evidence.³¹ In the meantime, harmful AI content can be copied, amended, and reposted thousands of times before authorities intervene. The Budapest Convention on Cybercrime aims to create a common framework for cooperation, but it does not address AI‑generated harms such as deepfakes, synthetic identity fraud, or algorithmic bias.³² To close these lacunae, academic and policy advocates are turning their attention to a worldwide AI enforcement mechanism modelled on existing international bodies such as INTERPOL or the International Telecommunication Union.³³ Without such cooperation, AI‑enabled crimes will continue to exploit jurisdictional loopholes, leaving victims unprotected and perpetrators beyond the reach of national law. Ultimately, AI governance depends not only on strong domestic legislation but on whether the world can build universal, interoperable, inter‑governmental standards that are enforceable and human‑centred. As AI systems become more embedded in critical infrastructure, judicial processes, and personal lives, coordinated global action is no longer wishful thinking but a prerequisite for protecting the digital commons.

CONCLUSION

THE FUTURE OF AI REGULATION REQUIRES:

  • Context‑aware, culturally sensitive AI.
  • Strong international cooperation.
  • Human‑in‑the‑loop oversight.
  • Ethical education for technologists.
  • A global commitment to protecting digital humanity.

The legal frameworks we build today will determine not only how machines behave—but what kind of society we choose to become.

 AI is not just a technological tool. It is a mirror reflecting our moral choices, cultural values, and legal priorities. As nations race to regulate AI, the world must confront a fundamental question: How do we preserve human dignity in an age where machines increasingly shape our reality?

Nowadays, humanity stands on the brink of absolute freedom… Or at least this is the illusion… Is it possible for the creation to become the master of its creators… to turn them into slaves?

OSCOLA FOOTNOTES:

¹ Albert Costa and others, ‘Your Morals Depend on Language’ (2014) 25 Psychological Science 1.

² TAKE IT DOWN Act 2025 (US).

³ Cyber Civil Rights Initiative, ‘State Deepfake Laws’ (2026).

⁴ DEFIANCE Act 2025 (US Congress, pending).

⁵ NO FAKES Act 2025 (US Senate Draft Bill).

⁶ Regulation (EU) 2024/… Artificial Intelligence Act.

⁷ Regulation (EU) 2022/2065 Digital Services Act.

⁸ Danish Ministry of Justice, AI Amendment Act 2025.

⁹ Online Safety Act 2023 (UK), as amended 2025.

¹⁰ Cyberspace Administration of China, ‘Provisions on the Administration of Deep Synthesis Internet Information Services’ (2025).

¹¹ South Korean Criminal Act (Amendment 2024).

¹² Sensity AI, ‘Deepfake Landscape Report’ (2023).

¹³ Joseph Henrich, Steven Heine and Ara Norenzayan, ‘The Weirdest People in the World?’ (2010) 33 Behavioral and Brain Sciences 61.

¹⁴ Abeba Birhane, ‘Algorithmic Colonization of Africa’ (2020) 7 SCRIPTed 389.

¹⁵ Luciano Floridi and Josh Cowls, ‘A Unified Framework of Five Principles for AI in Society’ (2019) Harvard Data Science Review.

¹⁶ Brent Mittelstadt, ‘The Ethics of Algorithms’ (2016) Big Data & Society.

¹⁷ Council of Europe, Framework Convention on Artificial Intelligence (2024).

¹⁸ United Nations, ‘Global Digital Compact’ (2024).

¹⁹ Mireille Hildebrandt, Law for Computer Scientists and Other Folk (OUP 2020).

²⁰ Lilian Edwards, ‘Regulating AI: The EU’s Artificial Intelligence Act’ (2022) 45 Computer Law & Security Review 105.

²¹ Woodrow Hartzog, Privacy’s Blueprint: The Battle to Control the Design of New Technologies (Harvard UP 2018).

²² Rogier Creemers, ‘China’s Social Credit System: An Evolving Practice of Control’ (2020) Journal of Contemporary China Studies.

²³ United Nations Office on Drugs and Crime (UNODC), ‘Model Legislative Provisions on Cybercrime’ (2021).

²⁴ Council of Europe, Framework Convention on AI (2024); United Nations, ‘Global Digital Compact’ (2024).

²⁵ Dan Svantesson, Solving the Internet Jurisdiction Puzzle (OUP 2017).

²⁶ Evelyn Douek, ‘Deepfakes and the Law: A Cross‑Border Enforcement Crisis’ (2021) 68 UCLA Law Review 102.

²⁷ Lilian Edwards, ‘Regulating AI’ (2022) 45 Computer Law & Security Review 105.

²⁸ Jack Balkin, ‘The Path of Robotics Law’ (2015) 6 California Law Review Circuit 45.

²⁹ Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (Nijhoff 2013).

³⁰ Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer 2013).

³¹ UNODC, ‘Practical Guide to MLATs’ (2020).

³² Council of Europe, Budapest Convention on Cybercrime (2001).

³³ Andrew Murray, ‘Towards a Global Framework for AI Enforcement’ (2023) 39 Computer Law & Security Review 105.
