Authored By: Ompha Ndou
Introduction: The Internet’s Irreversible Impact on the Rule of Law
The emergence of globally connected borderless information technology has fundamentally restructured the relationship between human conduct and the legal frameworks designed to govern it. Information Technology Law, also known as Cyber Law or ICT Law, has solidified as a specialized and highly hybridized field that must reconcile the inherent possibilities of computing, artificial intelligence, software, and virtual worlds with existing juridical regulation.1 This field is characterized by its reliance on elements from diverse established branches of law, including criminal law, contract law, intellectual property, and crucial fundamental rights such as the right to self-determination, privacy, and freedom of expression. The central thesis of modern cyber jurisprudence is that the global, borderless nature of digital technology has exposed profound limitations in traditional, geographically bounded legal systems. This inadequacy has forced a reactive legal adaptation that is currently diverging significantly across major jurisdictions, creating substantial complexity in criminal enforcement and the governance of increasingly autonomous systems.
Historical Context and Regulatory Lag
The legal evolution of internet regulation did not begin with immediate government oversight. Early discourse favoured a model of self-regulation.2 However, this early expectation was rapidly replaced by increasing state control and the criminalization of specific digital conduct, a shift accelerated dramatically following global security events such as the terrorist attacks in the USA on 11 September 2001. Governments increasingly justified state control of information and access as a necessary law enforcement and national security tool to combat serious online crimes, including terrorism, phishing, child pornography, and copyright infringement.
This necessary adaptation established a foundational tension that defines modern cyber law: balancing the state’s need for effective criminal justice response and national security tools against the defence of civil rights, fundamental liberties, and the free flow of information. International efforts to regulate this space, such as the Budapest Convention, explicitly recognize this dual mandate, seeking to reconcile the vision of a free internet with the necessity of an effective criminal justice response in cases of criminal misuse, always subject to human rights and rule of law safeguards.3 The need for the legal field to adapt existing concepts—such as tort, property, and contract law—to manage novel digital harms suggests that because technology rapidly outpaces new statutory creation, the result is often a complex and sometimes incoherent legal patchwork that requires continuous harmonization and review.
From Analogue Trespass to Digital Crime: The Evolution of Cyber Offenses
The Crisis of Analogue Jurisdiction: The CMA’s Genesis
Before dedicated legislation, traditional criminal statutes proved wholly inadequate for prosecuting intangible and remote digital intrusions. A pivotal moment demonstrating this systemic failure occurred in the United Kingdom with the case of R v Gold and Schifreen (1987).4 The defendants gained unauthorized access to the British Telecom Prestel service, even accessing the personal Prestel message box of Prince Philip, the Duke of Edinburgh. While the pair were initially convicted under the Forgery and Counterfeiting Act 1981, the convictions were later overturned on appeal. The court explicitly found that gaining access to the data bank dishonestly “by a trick” was not a criminal offence under existing law, stating that if it were desirable to make such remote access illegal, it was a matter for the legislature rather than the courts.5 This judicial impotence exposed a critical legal loophole and underscored a crucial legal principle: the definition and creation of new crimes must be reserved for the legislature. The vacuum created by this judgment was the direct causal link leading to the creation of the Computer Misuse Act 1990 (CMA).
The Computer Misuse Act 1990: Defining the Digital Intrusion6
The CMA 1990 was introduced as a targeted, reactive legislative effort to define and criminalize digital trespass. It successfully established core offenses necessary to address hacking and computer virus-related crimes, including ransomware. The Act conceptually utilizes digital equivalents of traditional physical crimes, linking unauthorized access to trespass, and aligning further offenses with concepts similar to burglary.7 The CMA created three primary offences, which remain the foundation of UK cybercrime prosecution:
- Section 1: Unauthorised access to computer material. This covers the basic act of digital trespass, where an individual intentionally accesses computer material without authorization.
- Section 2: Unauthorised access with intent to commit or facilitate commission of further offences. This offence carries a higher penalty and covers situations where unauthorized access is gained with the specific aim of committing a serious related crime, such as diverting funds (theft) or gaining access to sensitive information with a view to blackmailing the person concerned.
- Section 3: Unauthorised modification of computer material. This addresses the malicious alteration of data or programs, covering the deployment of viruses, ransomware, or other means of causing damage.
The Imperative for Modern Reform and Public Interest Defences
Despite its importance, the CMA’s core structure has remained largely unchanged for over three decades. This legislative lag means the Act is increasingly ill-equipped to handle the scale and complexity of modern cyber threats originating from sophisticated cybercriminals and hostile nation states. A critical review led by academic lawyers has argued that the CMA is “crying out for reform”.8 One major deficiency is the lack of a public interest defence for actions that technically violate the CMA, such as vulnerability research, which often requires unauthorized access to identify and remediate security flaws. The absence of a statutory public interest defence creates a regulatory chilling effect on legitimate cybersecurity professionals: necessary defensive activity, by definition, risks prosecution under the CMA. This implicit threat can stifle domestic cybersecurity innovation, delay vulnerability disclosure, and ultimately jeopardize national critical infrastructure. Furthermore, the lack of clear guidance for prosecutors and judges often leads to “draconian outcomes”, particularly for young people involved in early-stage hacking activities. In terms of future policy, the UK is urged to create defensive rules and not to follow the example of US legislators who have considered “hack back” laws that would permit firms to retaliate against online attacks.
Global Complexity: Jurisdictional Fault Lines and International Enforcement
The Fundamental Challenge of Sovereignty in Cyberspace
The defining characteristic of cybercrime is its transnational nature. Offenses such as cyber extortion and blackmail inherently occur across borders, meaning the victim and perpetrator are often geographically separated.9 Since cyberspace ignores traditional physical borders, this leads to severe legal fragmentation and conflicts of law, making it immensely complex to determine which nation’s laws apply, thereby impeding effective investigation and prosecution. This challenge is further complicated by severe operational obstacles: extradition treaties do not exist with every relevant country, and negotiating them is slow and politically fraught. Moreover, cybercriminals frequently use encryption and anonymization techniques to mask their identities and activities, greatly complicating detection and tracking efforts.10
The Crisis of Electronic Evidence
The difficulty in establishing jurisdiction is compounded by the procedural challenges inherent in electronic evidence collection. Many countries impose strict regulations concerning the transfer of digital evidence across their borders, creating significant data sovereignty issues that obstruct international investigations. Gathering digital evidence from foreign servers requires mutual legal assistance and international cooperation, which is frequently hindered by fundamental differences in legal systems, procedural rules, and privacy standards between jurisdictions. The complexity involved in cross-jurisdictional data access means that maintaining a clear and unbroken chain of custody for digital evidence is extremely difficult when multiple nations are involved.11 This procedural weakness often jeopardizes the evidence’s admissibility in court, further complicating efforts to bring cybercriminals to justice. These technical and legal variances confirm that absolute territorial sovereignty has failed in the domain of cyberspace; the success of domestic cybercrime enforcement is heavily reliant on geopolitical stability and international treaty adherence, transforming cyber law enforcement into a function of international diplomacy.
The International Solution: The Budapest Convention on Cybercrime
Recognizing the failures of strictly territorial law, the Council of Europe developed the Budapest Convention on Cybercrime.12 Opened for signature in 2001, it stands as the first multilateral legally binding instrument specifically designed to address crime committed against and by means of computers. The Convention represents an attempt to establish a functional form of jurisdiction based on shared definitions of prohibited conduct and standardized procedural mechanisms for gathering evidence. Its main pillars are:
- Harmonization of Substantive Criminal Law: Defining key offenses in cybercrime, including illegal access, data interference, systems interference, and computer-related fraud.
- Procedural Law Tools: Providing common domestic procedural tools necessary for the investigation and prosecution of such offenses, and for securing electronic evidence.
- International Cooperation: Setting up a fast and effective regime for international mutual assistance.
A critical feature that accounts for the Convention’s longevity and broad applicability is its technology-neutral language. This allows its provisions to be applied to a wide array of emerging threats, encompassing botnets, Distributed Denial of Service (DDoS) attacks, malware, identity theft, and attacks on critical infrastructure. Crucially, the Convention maintains an explicit mandate to protect human rights and liberties while ensuring effective criminal justice. However, the framework’s efficacy is limited by global geopolitical realities. The success of international cooperation is compromised by the refusal of major nations, such as Russia, to ratify the Convention, often citing concerns about sovereignty, thereby blocking mutual cooperation in specific law enforcement investigations.
Governing the Algorithmic Future: Ethical AI and Divergent Regulation
The proliferation of Artificial Intelligence (AI) systems presents novel legal challenges, particularly concerning ethics, fairness, and accountability, necessitating entirely new legal instruments beyond the scope of traditional ICT law.13
Core Ethical Challenges and the Liability Vacuum
AI introduces significant ethical dilemmas rooted in its function and societal impact. First, algorithmic bias and discrimination pose a pervasive threat. If AI systems are trained on incomplete or skewed data, they can perpetuate and automate systemic bias, potentially resulting in the profiling or unfair targeting of specific demographic groups.14 For example, a cybersecurity tool trained with biased data might disproportionately flag legitimate software used primarily by a specific cultural group as malicious, creating ethical concerns regarding discrimination and unjust consequences. Second, the socio-economic impact of AI automation is unavoidable.
As AI takes over routine tasks, such as automated incident response and routine threat detection, there is a legitimate ethical concern regarding large-scale job displacement within industries like cybersecurity. This requires society to proactively manage the consequences of potential job losses and ensure mechanisms for workforce retraining and transition. Finally, the liability vacuum emerges because traditional product liability and fault-based regimes struggle to assign accountability for harms caused by autonomous, opaque, and continuously evolving AI systems.15 The complexity of assigning fault when an algorithm autonomously learns and makes decisions has necessitated the creation of new legal frameworks.
The European Union: The Statutory Risk-Based Standard
The European Union has established a prescriptive, legally binding framework for AI governance known as the EU AI Act, which classifies systems based on their potential to cause harm, thereby functioning as a powerful global regulatory standard-setter (often termed the ‘Brussels Effect’).16 The Act classifies AI into four categories of risk:
- Unacceptable Risk: Systems deemed to pose a clear threat to fundamental rights are prohibited (e.g., real-time social scoring systems or highly manipulative AI).
- High Risk: These systems, used in critical infrastructure, law enforcement, employment, or other sensitive domains, are heavily regulated and subject to strict conformity assessments and regulatory obligations.
- Limited Risk: These systems are subject to mandatory transparency obligations, requiring developers and deployers to ensure end-users are aware they are interacting with an AI (e.g., chatbots or deepfakes).
- Minimal Risk: The majority of AI applications, such as spam filters, remain unregulated.

To address the inherent challenge of dynamic liability, the European Commission has proposed the AI Liability Directive (AILD) and revisions to the existing Product Liability Directive (PLD). The AILD specifically assists claimants in making non-contractual fault-based claims for damage caused by unlawful discrimination or a breach of safety rules embedded within an AI system’s algorithms.
The framework critically acknowledges that AI is often a dynamic service, not a static product. Manufacturers or providers are mandated to remain liable for defects that arise after the system has been placed on the market, particularly if those defects result from the AI system’s “ability to continuously learn” or from subsequent software upgrades. This approach fundamentally rewrites product liability rules by imposing a duty of maintenance and monitoring that extends the legal timeline of responsibility beyond the initial sale, directly addressing algorithmic autonomy. To facilitate justice, courts can order the disclosure of evidence relating to suspected high-risk AI systems that are alleged to have caused damage, provided the claimant can demonstrate the plausibility of the claim.
Divergent Global Approaches: Flexibility vs. Prescription
In contrast to the EU’s highly prescriptive, statutory framework, other major jurisdictions have opted for more decentralized or principles-based approaches.
United States
The US favors a decentralized, sector-specific regulatory strategy.17 Unlike the EU Act, which is a legally binding horizontal framework, the US approach is guided primarily by federal agency oversight, executive initiatives (such as the National Artificial Intelligence Initiative Act of 2020), and voluntary commitments from private companies. This fragmented regulatory environment aims to balance innovation with ethical considerations and risk management, although it creates challenges in achieving legal uniformity.
United Kingdom
The UK has historically prioritized a flexible, non-statutory “principles-based framework”, relying on existing sector-specific regulators to interpret and apply overarching principles to AI within their respective domains. This approach was favoured for its “critical adaptability” to the rapid pace of technological change. However, there is a recognized shift, with recent proposals indicating an intention to establish binding legislation that places specific requirements on those developing the most powerful AI models, suggesting a move toward a statutory duty on regulators to enforce the framework. Furthermore, UK law links digital activity to environmental sustainability: the Streamlined Energy and Carbon Reporting (SECR) framework requires large companies that offer AI services or host data centres to report their energy use and greenhouse gas emissions.18
The Sky is Not the Limit: Regulating Drones, Airspace, and Privacy
The regulation of Unmanned Aerial Systems (UAS), or drones, presents a unique challenge, forcing the legal system to reconcile ancient common law concepts of property ownership with modern aviation and data protection standards.19
The Death of Ad Coelum and Modern Airspace Rights
Historically, the common law doctrine of cujus est solum, ejus est usque ad coelum dictated that ownership of land extended to the periphery of the universe: landowners were traditionally said to own the airspace “up to the heavens”. However, the advent of aircraft and, more recently, small commercial and consumer drones, made this absolute doctrine untenable. Courts have since rejected this expansive claim, holding that property owners’ rights extend only to the “superadjacent” airspace, or “at least as much of the space above the ground as they can occupy or use in connection with the land”. Building on this refinement, the Civil Aviation Act 1982, s 76, bars actions in trespass or nuisance in respect of aircraft flight over property at a height which, in all the circumstances, is reasonable; flight so low that it interferes with the ordinary use of the land falls outside this protection.20
Judicial Intervention: Trespass, Nuisance, and Functional Interference
Modern caselaw confirms that the simple act of flying a drone over private land is not automatically considered trespass; instead, courts scrutinize the use to which the drone is put and whether it functionally interferes with the landowner’s reasonable enjoyment and rights. The ruling in Anglo International Upholland Ltd v Wainwright & Persons Unknown demonstrated this focus.21 The court concluded that while simple flight may not be trespass, flying a drone at a height from which videos and photographs were taken, which were subsequently used to encourage and facilitate further physical trespasses, was sufficient to constitute actionable trespass.
This led to an interim injunction banning drone flights over the site. Similarly, in MBR Acres Ltd v Curtin, where protestors flew drones over a site for animal breeding, the court granted an injunction.22 The ruling suggested that the flight path, which was low enough (below 100 metres) to provide clear recordings, could not be considered “reasonable” under the CAA 1982 and therefore constituted trespass. These rulings show that courts are effectively prioritizing the protection of the privacy interest—the ability to enjoy land without being monitored—over the strict physical definition of airspace, effectively fusing property law with data protection principles.
The Intersection with Privacy and Data Protection
Camera-equipped drones inherently intersect with data protection law, particularly the UK General Data Protection Regulation (GDPR).23 If a drone is fitted with a camera or listening device, operators must rigorously respect the privacy of others. It is highly likely that using these devices to record or photograph activities in spaces where people expect privacy, such as inside their homes or gardens, constitutes a breach of data protection laws. Furthermore, the law explicitly prohibits taking photographs or recordings for criminal or terrorist purposes.
Global Regulatory Divergence in Aviation Safety
Global drone regulation has split into two primary models, reflecting different philosophies on managing future technological risks. The United States, through the FAA’s Part 107, adopted a simpler, operator-centric, and waiver-dependent rule for drones under 55 pounds, which helped launch the commercial drone industry quickly. In contrast, the EU/UK framework (EASA/CAA) employs a more complex, granular, risk-based Three-Category System (Open, Specific, Certified).24 While more involved, this proactive approach is fundamentally built for a future involving autonomous operations and specialized commercial services. It creates a clear, defined path for manufacturers and operators to manage pre-approved risk profiles, demonstrating a regulatory choice focused on controlled, robust scaling for long-term complexity, rather than solely rapid market entry.
Conclusion: Synthesis and Future Regulatory Imperatives
The evolution of cyber law demonstrates an accelerating challenge to maintain the efficacy and coherence of traditional jurisprudence in the face of rapid technological disruption. The digital revolution has been defined by reactive legislation and fragmented international authority. Foundational cyber law, such as the Computer Misuse Act 1990, was a direct consequence of judicial failure in cases like R v Gold and Schifreen, acting primarily to close specific loopholes. However, the CMA’s structure now struggles to accommodate modern realities, such as the need for statutory public interest defenses for legitimate cybersecurity activity, illustrating a persistent regulatory lag. The jurisdictional challenge remains the great unsolved problem of cyberspace.
Cross-border cybercrime confirms that the success of domestic enforcement is inextricably linked to international political cooperation. Treaties like the Budapest Convention are indispensable for establishing functional jurisdiction and standardized evidence procedures but their effectiveness is curtailed by the refusal of geopolitically significant non-ratifying nations. The governance of AI and drones illustrates how legal systems are grappling with the imminent age of autonomy. The EU’s AI Act has set a powerful global precedent for proactive, statutory risk classification, forcing legal frameworks to address the critical issues of algorithmic bias and to redefine liability—shifting responsibility from a static “product” to a dynamic, learning “service”. Simultaneously, drone regulation shows how physical technology forces a modernization of ancient common law; property rights are redefined by the functional impact of technology on privacy and enjoyment, rather than strict physical boundaries.
To successfully govern the digital era, future legal frameworks must execute a necessary transition from reactive damage control to proactive, principles-based legal engineering. This requires global regulatory interoperability, technology-neutral drafting, and a fundamental anchoring in ethical considerations to protect human rights against the pervasive and rapidly evolving reach of digital and autonomous systems. Continuous collaboration between technologists, policymakers, and legal experts is essential to anticipate future harms and prevent further costly regulatory lag.
Works Cited
- The Evolution of Internet Legal Regulation in Addressing Crime and Terrorism – Scholarly Commons, https://commons.erau.edu/cgi/viewcontent.cgi?article=1022&context=jdfsl
- Key facts – Cybercrime – The Council of Europe, https://www.coe.int/en/web/cybercrime/key-facts
- The Computer Misuse Act (CMA) turns 30 years old | Fox IT, https://www.fox-it.com/be/the-computer-misuse-act-cma-turns-30-years-old/
- Assessing the Seriousness of Cybercrime: The Case of Computer Misuse Crime in the United Kingdom and the Victims’ Perspective – Portsmouth Research Portal, https://researchportal.port.ac.uk/files/58162871/Manuscript_for_Pure.pdf
- Computer Misuse Act | The Crown Prosecution Service, https://www.cps.gov.uk/prosecution-guidance/computer-misuse-act
- Cybercrime laws need urgent reform to protect UK, says report – The Guardian, https://www.theguardian.com/technology/2020/jan/22/cybercrime-laws-need-urgent-reform-to-protect-uk-says-report
- Examining Jurisdictional Challenges in Cross-Border Cyber Blackmail Cases Under Federal Law, https://federal-criminal.com/computer-crimes/examining-jurisdictional-challenges-in-cross-border-cyber-blackmail-cases-under-federal-law/
- Examining Jurisdictional Challenges in Cross-Border Cyber Extortion Cases Under Federal Law – Leppard Law – Top Rated Orlando DUI Lawyers & Criminal Attorneys in Orlando, https://leppardlaw.com/federal/computer-crimes/examining-jurisdictional-challenges-in-cross-border-cyber-extortion-cases-under-federal-law/
- Budapest Convention on Cybercrime – Wikipedia, https://en.wikipedia.org/wiki/Budapest_Convention_on_Cybercrime
- The Ethical Dilemmas of AI in Cybersecurity – ISC2, https://www.isc2.org/Insights/2024/01/The-Ethical-Dilemmas-of-AI-in-Cybersecurity
- High-level summary of the AI Act | EU Artificial Intelligence Act, https://artificialintelligenceact.eu/high-level-summary/
- Artificial intelligence and liability: Key takeaways from recent EU legislative initiatives, https://www.nortonrosefulbright.com/en/knowledge/publications/7052eff6/artificial-intelligence-and-liability
- The U.S. Approach to AI Regulation: Federal Laws, Policies, and Strategies Explained, https://scholarlycommons.law.case.edu/jolti/vol16/iss2/2/
- United States approach to artificial intelligence – European Parliament, https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/EPRS_ATA(2024)757605_EN.pdf
- AI Watch: Global regulatory tracker – United Kingdom | White & Case LLP, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-kingdom
- AI and Sustainability Reporting: A Comparative Analysis of Regulatory Frameworks in the UK, EU, and US, https://aijourn.com/ai-and-sustainability-reporting-a-comparative-analysis-of-regulatory-frameworks-in-the-uk-eu-and-us/
- UNMANNED AIRCRAFT: DEFINING PRIVATE AIRSPACE – National Association of Mutual Insurance Companies, https://www.namic.org/wp-content/uploads/legacy/drones/1703_privateairspace.pdf
- The Rise of Drones and the Erosion of Privacy and Trespass Laws | American Bar Association, https://www.americanbar.org/content/dam/aba/publications/air_space_lawyer/fall2020/asl_v033n03_fall2020_gipson.pdf
- Protecting rural estates from drones and trespass – Farrer & Co, https://www.farrer.co.uk/news-and-insights/protecting-rural-estates-from-drones-and-trespass/
- English High Court recent rulings consider landowners’ rights to an injunction against drone users | Osborne Clarke, https://www.osborneclarke.com/insights/english-high-court-recent-rulings-consider-landowners-rights-injunction-against-drone
- Protecting people’s privacy (points 21 to 26) | UK Civil Aviation Authority, https://www.caa.co.uk/drones/getting-started-with-drones-and-model-aircraft/drone-code/protecting-people-s-privacy-points-21-to-26/
- Global Drone Regulations: A Comparison of EU/UK and US Drone Laws – AirSight, https://www.airsight.com/blog/global-drone-regulations-a-comparison-of-eu/uk-and-us-drone-laws
1 P Rathore, ‘Introduction to Cyber Law and Understanding the Evolution, Scope and Significance of Cyber Law’ (2025) 3 Int J Criminol Criminal Law 4; Michalsons, ‘What is IT law, ICT law or Cyber law?’.
2 I Russ, ‘Legal Regulation of the Internet: Evolution from Self-Regulation to State Control’ (2013) 7 JDFSL 2, 1.
3 Council of Europe, Budapest Convention on Cybercrime (2001), Arts 15-17.
4 R v Gold and Schifreen (1987).
5 ibid.
6 Computer Misuse Act 1990, ss 1–3; Dr F Mears and J O’Hara, ‘The Computer Misuse Act 1990: 30 Years Old’ (2020) 1 J Civ Lib 12.
7 Crown Prosecution Service, ‘Computer Misuse Act: Prosecution Guidance’.
8 Dr J Child and others, Reforming the Computer Misuse Act (University of Birmingham & Cambridge, 2020).
9 The Lawfare Institute, ‘Jurisdictional Challenges in Cross-Border Cyber Blackmail Cases’.
10 Leppard Law, ‘Challenges in Enforcing Federal Law Across Borders’.
11 Council of Europe, ‘Accession by other non–Council of Europe states’; V Gorskiy, ‘The First Global Treaty Against Cybercrime’ (2023) 5 Russ L J 1, 2.
12 (ISC)², ‘The Ethical Dilemmas of AI in Cybersecurity’.
13 European Parliament, Artificial Intelligence Act: High-Level Summary (2024).
14 Norton Rose Fulbright, ‘Artificial Intelligence and Liability’.
15 A Case, ‘How the United States Approaches Artificial Intelligence Governance’ (2024) 16 Case W Res J Int’l L 1.
16 European Parliament Research Service, ‘Current global artificial intelligence policy landscape’ (2024).
17 White & Case, ‘AI Watch: Global Regulatory Tracker – United Kingdom’.
18 K Simmonds and A M Gutiérrez, ‘AI and Sustainability Reporting’ (2024) 4 AI J 1, 3.
19 T Gipson, ‘Drones, Airspace and Property Rights’ (2020) 33 Air & Space Law 3, 2.
20 ibid; Civil Aviation Act 1982, s 76.
21 Anglo International Upholland Ltd v Wainwright & Persons Unknown EWHC 10 (Ch); Farrer & Co, ‘Protecting Rural Estates from Drones and Trespass’.
22 MBR Acres Ltd v Curtin EWHC 11 (QB); Osborne Clarke, ‘English High Court recent rulings consider landowners’ rights to an injunction against drone’.
23 Civil Aviation Authority, ‘The Drone Code: Protecting People’s Privacy’.
24 Airsight, ‘Global Drone Regulations: A Comparison of EU/UK and US Drone Laws’.





