
EMERGING TECHNOLOGIES AND THE LAW: A COMPARATIVE ANALYSIS OF TECHNOLOGICAL DISRUPTION AND THE LIMITS OF LEGAL FRAMEWORKS

Authored By: LUBUTO MOONDE

University of Zambia

ABSTRACT

Technology is the application of scientific knowledge for practical purposes, particularly in industry, communication and human development. It refers not only to physical tools, machines and devices, but also to processes, methods and systems designed to solve problems or achieve specific objectives, such as Artificial Intelligence tools. Artificial Intelligence (AI), in essence, refers to the development of computer systems that can perform tasks which typically require human intelligence. At its core, AI is the ability of a machine to simulate human cognitive functions such as learning, reasoning, problem-solving, perception and decision-making. The law, in turn, refers to the body of rules that govern human conduct. This discourse therefore critically analyzes the impact of technology on the law and the challenges that technology poses to existing legal frameworks, and offers recommendations on how those frameworks can respond to growing technological advancement in the legal environment.

INTRODUCTION

The exponential growth of Artificial Intelligence (AI) technologies has created both opportunities and profound legal challenges. Unlike previous waves of technological innovation, AI does not merely augment human activity but increasingly makes autonomous, data-driven decisions that raise fundamental questions for existing legal frameworks. These challenges manifest across multiple domains, including liability, data protection, fundamental rights, labor law, intellectual property and governance. The law, traditionally reactive and slow to evolve, struggles to keep pace with AI’s disruptive capacity, necessitating doctrinal re-examination and normative reform. It thus faces a fundamental choice: whether to adapt through incremental reform or to embrace transformative regulatory models. This discourse explores these challenges, drawing on international authorities and comparative jurisprudence, with particular reference to Zambia, to illustrate both the gaps and possible avenues for reform.

Artificial Intelligence and the Challenge to Legal Liability and Accountability

Traditional liability frameworks in tort, product liability and criminal law rest on the assumption that responsibility can be attributed to identifiable human actors exercising control over their conduct. AI systems, however, increasingly operate with autonomy and unpredictability, thereby complicating the attribution of legal responsibility. A pertinent example is the case of autonomous vehicles. When an AI-controlled car causes harm, it remains unclear whether liability should be assigned to the programmer, the manufacturer, the owner or operator of the vehicle, or whether entirely new legal categories are required to capture the role of the AI system itself. This difficulty is compounded by the “black box” nature of many AI algorithms, which frustrates the evidentiary requirements of traditional fault-based models.

The European Union has explicitly recognized this regulatory gap. In its draft AI Liability Directive,[1] the EU proposes reversing the burden of proof in certain AI-related claims, thereby easing the evidentiary burden on victims by allowing for rebuttable presumptions of causation where the opacity of AI systems prevents direct proof of fault. This initiative complements the ongoing reform of the Product Liability Directive,[2] which seeks to adapt strict liability rules to account for damage caused by software and algorithmic systems. By contrast, most common law jurisdictions, such as Zambia, continue to rely heavily on traditional negligence and product liability doctrines, which require proof of fault and are poorly suited to addressing the autonomous and non-linear functioning of AI.[3]

The United States, for its part, has adopted a piecemeal approach. The National Highway Traffic Safety Administration (NHTSA) has issued guidance on autonomous vehicle safety but has refrained from imposing binding statutory obligations, preferring instead a flexible, self-regulatory model.[4] This patchwork approach has been criticized for leaving significant accountability gaps, particularly where victims are unable to establish a clear causal link between the harm suffered and the actions of a human actor.[5]

Fundamental Rights and Discrimination

AI systems, while often presented as neutral tools, can entrench and even exacerbate existing social and structural biases. Algorithmic decision-making processes carry risks of bias, discrimination, arbitrary interference and inequitable access to essential services, including healthcare.[6] Such practices threaten the fundamental rights to equality, non-discrimination and due process of law. A striking example is Loomis v. Wisconsin,[7] in which the Wisconsin Supreme Court upheld the use of a proprietary risk-assessment algorithm in criminal sentencing; the U.S. Supreme Court subsequently declined to review the decision. The case highlighted the inherent tension between the efficiency promised by algorithmic tools and the constitutional guarantee of a fair trial, particularly where the opacity of proprietary software denies defendants the opportunity to challenge the basis of decisions affecting their liberty.

At the international level, the UN Human Rights Council[8] has warned that unregulated AI systems pose significant risks to the rights to privacy, freedom of expression and protection against discrimination. Similarly, the Human Rights Committee’s General Comment No. 36[9] emphasizes that states have an affirmative obligation to safeguard against technological practices that undermine the inherent dignity of individuals or the rights to life, privacy and equality before the law.

In jurisdictions such as Zambia, where constitutional rights to equality, dignity and non-discrimination are protected under the Bill of Rights in the National Constitution,[10] the absence of explicit statutory or regulatory safeguards against algorithmic bias creates a lacuna in rights protection, leaving individuals vulnerable to infringements in contexts such as employment, financial services and healthcare. Unless addressed, this regulatory gap risks undermining Zambia’s obligations under international human rights instruments, including the International Covenant on Civil and Political Rights (ICCPR) and the African Charter on Human and Peoples’ Rights, both of which Zambia has ratified.

Data Protection, Privacy and Emerging Technologies

The rapid growth of Artificial Intelligence and biotechnology is heavily dependent on the large-scale harvesting and processing of personal and sensitive data, raising profound challenges for existing data protection frameworks. Within the European Union, the General Data Protection Regulation (GDPR)[11] provides a robust regime that enshrines principles of informed consent,[12] data minimization[13] and purpose limitation.[14] It further introduces rights tailored to the digital age, including the “right to explanation” for individuals subject to automated decision-making,[15] thereby seeking to ensure algorithmic accountability. Some scholars observe, however, that the implementation of this right remains contested, particularly in light of the opacity inherent in machine-learning processes.[16]

Judicial developments have further underscored these tensions. In Digital Rights Ireland Ltd. v. Minister for Communications, Marine and Natural Resources,[17] the Court of Justice of the European Union struck down the Data Retention Directive for disproportionate interference with privacy rights under Articles 7 and 8 of the Charter of Fundamental Rights of the EU, emphasizing the necessity of safeguards when intrusive technologies are deployed. Similarly, in Data Protection Commissioner v. Facebook Ireland and Maximillian Schrems,[18] the Court invalidated the EU–US Privacy Shield, reinforcing that adequacy in data protection requires stringent accountability mechanisms against misuse of personal data in digital ecosystems. Taken together, these cases establish three propositions: that mass, indiscriminate surveillance regimes are disproportionate and incompatible with fundamental rights; that any state measure involving large-scale data collection must satisfy strict proportionality tests, respecting privacy, necessity and judicial safeguards; and that cross-border data transfers must ensure a level of protection essentially equivalent to that guaranteed within the EU.

In Zambia, the enactment of the Data Protection Act, No. 3 of 2021[19] represents a progressive attempt to align with global best practices, recognizing privacy as a statutory right and introducing safeguards on consent, cross-border transfers and sensitive data processing. Nonetheless, significant lacunae remain as the Act does not explicitly regulate algorithmic accountability or automated decision-making, nor does it provide for a statutory analogue to the GDPR’s right to explanation. Consequently, individuals remain vulnerable to opaque forms of digital surveillance, profiling and algorithmic discrimination, particularly as AI-driven decision-making begins to penetrate critical sectors such as finance, healthcare and law enforcement.

Intellectual Property

The advent of Artificial Intelligence and biotechnology has significantly disrupted the foundational doctrines of Intellectual Property (IP), particularly the long-standing requirements of human authorship in copyright and inventorship in patent law. Traditional IP frameworks are premised on the notion that creative and inventive outputs stem from identifiable human agents exercising skill, labor or inventiveness. However, with AI systems increasingly capable of autonomously generating artworks, musical compositions and even scientific inventions, pressing legal questions arise as to whether such outputs qualify for protection, and if so, who ought to be recognized as the rights holder.

Judicial authorities have begun to grapple with these dilemmas. In Thaler v. Comptroller-General of Patents, Designs and Trademarks,[20] the court rejected the argument that an AI system, DABUS, could be designated as an inventor under the UK Patents Act, reaffirming that only natural persons may hold inventorship. A similar conclusion was reached in Thaler v. Hirshfeld,[21] where the court, relying on the plain language of the Patent Act and Federal Circuit precedent, held that an artificial intelligence machine cannot be an inventor for purposes of the Act, underscoring an emerging judicial consensus that the legal category of inventor remains tethered to human agency. These rulings highlight the doctrinal rigidity of existing patent law, even as technological realities challenge its underpinnings.

At the supranational level, the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement)[22] is silent on non-human creativity, implicitly presupposing human authorship and inventorship. In copyright law, similar challenges are evident. The Berne Convention for the Protection of Literary and Artistic Works[23] predicates protection on the originality of human expression. Scholarly commentary has observed that this anthropocentric orientation may be increasingly ill-suited for regulating creative outputs in the era of AI.[24]

Domestically, Zambia’s legal framework reinforces these traditional conceptions. The Patents Act[25] stipulates in Section 2 that only natural persons can be inventors, while the Copyright and Performance Rights Act[26] similarly presumes human authorship as a precondition for subsistence of rights. This adherence to human-centric definitions creates a growing tension between technological realities and statutory constructs, leaving AI-generated works and genetic inventions outside the ambit of formal protection.

Taken together, the Zambian framework demonstrates both compliance with international obligations and a reliance on traditional doctrinal categories that risk obsolescence in the face of technological change. Without reform, the law may fail to provide clarity, either by denying protection to AI-generated and genetic innovations altogether or by creating ownership vacuums that discourage investment and exploitation.

Labor and Employment Law

The proliferation of Artificial Intelligence and algorithmic management systems has profoundly reshaped labor markets, raising complex regulatory challenges that traditional employment law frameworks are ill-equipped to address. Automation and digital labor platforms increasingly mediate employment relationships, with implications for unfair dismissal, collective bargaining, working time regulation and occupational health and safety. Scholars argue that the rise of the “gig economy” threatens to erode core labor protections by reframing workers as independent contractors rather than employees, thereby circumventing established safeguards under labor law.[27]

Judicial developments illustrate this growing tension. In Uber BV v. Aslam,[28] the UK Supreme Court held that Uber drivers were workers rather than independent contractors, entitling them to minimum wage and paid leave protections under the UK Employment Rights Act. This case highlights how courts in advanced jurisdictions are adapting employment law principles to algorithmic management models.

Legislative responses have also emerged. Spain’s Riders’ Law[29] directly regulates platform work by mandating the recognition of riders as employees and imposing transparency obligations on employers regarding the functioning of algorithmic management tools. In particular, it obliges platforms to inform their works councils about the parameters and rules their AI systems use to manage work. The legislation thereby moves platform workers away from independent contractor status and affords them greater statutory protection. At the international level, the International Labor Organization (ILO) Future of Work Report[30] warns that, absent adequate labor protections, technological displacement may be widespread, emphasizing the need to safeguard rights to collective bargaining, decent work and social security in the digital era.

In the Zambian context, the Employment Code Act, No. 3 of 2019[31] consolidates protections relating to unfair dismissal, minimum conditions of service and occupational safety but does not address emerging challenges posed by algorithmic management, gig economy platforms or AI-driven automation. This legislative lacuna risks leaving platform workers such as ride-hailing drivers, delivery riders and freelance digital workers outside the ambit of statutory labor rights. Further, while Article 23 of the Constitution of Zambia[32] guarantees equality and freedom from discrimination, its application to algorithmically mediated workplaces remains untested.

CONCLUSION: Responsive Legal Frameworks

AI exposes the mismatch between traditional legal concepts and the demands of autonomous technologies: liability frameworks struggle to allocate accountability, data protection regimes are strained by big-data practices, and fundamental rights risk erosion through opaque algorithmic decision-making. Labor protections and IP doctrines, designed for human-centric economies, must evolve to address AI’s disruptive impact.

A comprehensive approach to regulating emerging technologies in Zambia could involve enacting AI-specific legislation to clarify liability, promote algorithmic transparency and embed human rights safeguards and modernizing labor laws to respond to challenges of algorithmic management and workplace automation. Additionally, intellectual property law may need reform to accommodate AI-generated works through frameworks such as shared authorship or sui generis protection.

In sum, emerging technologies not only strain the limits of current legal frameworks but also challenge the conceptual foundations of law, from liability and authorship to personhood and dignity. Addressing these challenges requires a rights-based, anticipatory and multidisciplinary approach that balances innovation with justice and human rights.

REFERENCE(S):

Books and Journal Articles Referred to:

Bertolini, A., “Artificial Intelligence and Civil Liability: The Gap Between Legal Certainty and Technological Complexity” (2020) European Journal of Risk Regulation, Vol. 11(2).

De Stefano, Valerio, The Rise of the ‘Just-in-Time Workforce’: On-Demand Work, Crowd Work and Labour Protection in the ‘Gig-Economy’ (2015).

Ryan Abbott, Artificial Intelligence and Intellectual Property: An Introduction, In Research Handbook On Intellectual Property and Artificial Intelligence, 2022.

Cases Referred to:

Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources (Joined Cases C-293/12 and C-594/12, CJEU, 2014).

Donoghue v. Stevenson [1932] AC 562 (HL)

Loomis v. Wisconsin 881 N.W.2d 749 (Wis. 2016).

Data Protection Commissioner v. Facebook Ireland and Maximillian Schrems, Case C-311/18, CJEU, 2020.

Stephen Thaler v. Andrew Hirshfeld No. 1:20-cv-903, 2021 WL 3934803.

Thaler v. Comptroller-General of Patents, Designs & Trademarks [2021] EWCA Civ 1374

Uber BV v. Aslam [2021] UKSC 5

Statutes Referred to:

AI Liability Directive COM/2022/496 final

Constitution of Zambia (Amendment) Act No. 2 of 2016 Article 23.

Copyright and Performance Rights Act No. 44 of 2010.

Employment Code Act, No. 3 of 2019.

Riders’ Law (Royal Decree-Law 9/2021)

Patents Act No. 40 of 2016 Section 2.

Product Liability Directive 85/374/EEC

The Data Protection Act, No. 3 of 2021.

U.S. Department of Transportation, National Highway Traffic Safety Administration (NHTSA), Automated Driving Systems 2.0: A Vision for Safety (2017).

Conventions Referred to:

Agreement on Trade-Related Aspects of Intellectual Property Rights, WTO, entered into force 1 January 1995.

Berne Convention for the Protection of Literary and Artistic Works, September 9, 1886, as amended in 1979.

General Data Protection Regulation (GDPR), Regulation (EU) 2016/679.

Human Rights Committee, General Comment No. 36 (CCPR/C/GC/36, 2018).

International Labor Organization (ILO) Future of Work Report (2019)

OECD, Artificial Intelligence in Society, 2022.

United Nations Human Rights Council, A/HRC/47/24 (2021).

Online Sources

Wachter, Mittelstadt and Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation (December 28, 2016). International Data Privacy Law, 2017. Available at SSRN: https://ssrn.com/abstract=2903469 or http://dx.doi.org/10.2139/ssrn.2903469

[1] AI Liability Directive COM/2022/496 final

[2] Product Liability Directive 85/374/EEC

[3] Donoghue v. Stevenson [1932] AC 562 (HL) contrasted with the EU’s draft AI Liability Directive, COM/2022/496 final.

[4] U.S. Department of Transportation, National Highway Traffic Safety Administration (NHTSA), Automated Driving Systems 2.0: A Vision for Safety (2017).

[5] Bertolini, A., “Artificial Intelligence and Civil Liability: The Gap Between Legal Certainty and Technological Complexity” (2020) European Journal of Risk Regulation, Vol. 11(2), pp. 237–256.

[6] OECD, Artificial Intelligence in Society, 2022.

[7] Loomis v. Wisconsin 881 N.W.2d 749 (Wis. 2016).

[8] United Nations Human Rights Council, A/HRC/47/24 (2021).

[9] Human Rights Committee, General Comment No. 36 (CCPR/C/GC/36, 2018).

[10] Constitution of Zambia (Amendment) Act No. 2 of 2016 Articles 11-23.

[11] General Data Protection Regulation (GDPR), Regulation (EU) 2016/679.

[12] General Data Protection Regulation (GDPR), Regulation (EU) 2016/679, Article 7.

[13] General Data Protection Regulation (GDPR), Regulation (EU) 2016/679, Article 5(1)(c).

[14] General Data Protection Regulation (GDPR), Regulation (EU) 2016/679, Article 5(1)(b).

[15] General Data Protection Regulation (GDPR), Regulation (EU) 2016/679, Article 22.

[16] Wachter, Mittelstadt and Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation (December 28, 2016). International Data Privacy Law, 2017, Available at SSRN: https://ssrn.com/abstract=2903469 or http://dx.doi.org/10.2139/ssrn.2903469

[17] Digital Rights Ireland Ltd v. Minister for Communications, Marine and Natural Resources (Joined Cases C-293/12 and C-594/12, CJEU, 2014).

[18] Data Protection Commissioner v. Facebook Ireland and Maximillian Schrems Case C-311/18, CJEU, 2020

[19] The Data Protection Act, No. 3 of 2021.

[20] Thaler v. Comptroller-General of Patents, Designs & Trademarks [2021] EWCA Civ 1374 (UK Court of Appeal).

[21] Stephen Thaler v. Andrew Hirshfeld No. 1:20-cv-903, 2021 WL 3934803

[22] Agreement on Trade-Related Aspects of Intellectual Property Rights, WTO, entered into force 1 January 1995.

[23] Berne Convention for the Protection of Literary and Artistic Works, September 9, 1886, as amended in 1979.

[24] Ryan Abbott, Artificial Intelligence and Intellectual Property: An Introduction, In Research Handbook On Intellectual Property and Artificial Intelligence, 2022.

[25] Patents Act No. 40 of 2016 Section 2.

[26] Copyright and Performance Rights Act No. 44 of 2010.

[27] De Stefano, Valerio, The Rise of the ‘Just-in-Time Workforce’: On-Demand Work, Crowd Work and Labour Protection in the ‘Gig-Economy’ (2015).

[28] Uber BV v. Aslam [2021] UKSC 5

[29] Riders’ Law (Royal Decree-Law 9/2021)

[30] International Labor Organization (ILO) Future of Work Report (2019)

[31] Employment Code Act, No. 3 of 2019.

[32] Constitution of Zambia (Amendment) Act No. 2 of 2016 Article 23.
