Authored By: Caroline Atuhaire
Abstract
In light of the Artificial Intelligence Act, Regulation (EU) 2024/1689, and the recast Product Liability Directive, this article explores Europe’s human-centric approach to AI governance. It argues that the EU’s risk-based regulatory framework embeds transparency, accountability, and fundamental rights protection into digital innovation. The discussion covers the classification of AI systems by risk level, the responsibilities placed on providers and deployers, and the prohibition of discriminatory or manipulative AI practices. It also considers how ex ante compliance under the AI Act relates to ex post liability and redress under the Product Liability Directive in strengthening individual protection. The article concludes that these measures redefine digital responsibility in the algorithmic age, signalling a shift from unchecked innovation to ethical leadership.
Introduction
One winter evening in a small Thessaloniki office, I paused to consider how algorithms quietly shape individual lives: a job candidate filtered out by a screening model, a route chosen by a logistics system, a student offered a smaller scholarship on the strength of an automated score. As a legal, risk, and compliance professional, I am used to warnings about black-box systems, white-box models, and high-speed innovation that outpaces regulatory scrutiny. But it is in such moments, the human moments, that we see that behind every system there is a person whose future, dignity, or rights may be at stake. It is precisely this human dimension that motivates Europe’s regulatory approach to artificial intelligence (AI). The aim of the Artificial Intelligence Act is to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union1. The AI Act and the recast Product Liability Directive (“PLD”) mark a decisive turning point: digital innovation can no longer advance unchecked, and the rules are designed to further human rights, safety, and accountability and to establish international minimum standards for trustworthy AI2.
Statement
This article examines how Europe’s risk-based AI framework redefines digital responsibility by embedding ethical governance, accountability, and human rights at the core of technological development.
Research Methodology
This article employs a doctrinal and analytical approach. It draws upon EU legislation, official communications, and academic commentary to evaluate the legal and compliance implications of the AI Act and the revised Product Liability Directive. The analysis focuses on the regulatory architecture, enforcement mechanisms, and ethical implications of Europe’s human-centric model for AI governance.
Legal Framework: The EU’s Risk-Based AI Regulation.
The AI Act establishes a risk-based system under which AI systems are classified according to their use case, with compliance requirements scaled to the degree of risk each system presents to its users3. This hierarchy identifies four categories: unacceptable risk, high risk, transparency risk, and minimal risk4. At the top tier, AI practices that pose an unacceptable risk to Union values and fundamental rights are prohibited outright. These bans target AI-based systems that exploit the vulnerabilities of particular groups5. Most importantly, they forbid social scoring systems that categorise or assess natural persons according to their social behaviour, which can result in unfair treatment. Also prohibited is the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement, except for exhaustively listed and narrowly defined objectives that require prior authorisation by a judicial or independent administrative authority6.
The core of the regulation concerns high-risk AI systems, defined as systems capable of adversely affecting safety or fundamental rights7. These systems fall into two pathways: first, safety components in products covered by existing EU harmonisation legislation, for instance medical devices8; and second, systems deployed in specific fields that threaten safety or fundamental rights, for instance employment, essential public services, law enforcement, migration management, and the administration of justice9. Within Annex III, a system that profiles natural persons is always considered high-risk. For other Annex III systems, a derogation exists: a system is not high-risk if it performs only a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing a human assessment, or performs a task preparatory to a relevant assessment10.
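To make the derogation logic concrete, the Python sketch below models it as a simple decision function. It is a reading aid, not a compliance tool: the boolean flags are hypothetical stand-ins for assessments that in practice require legal analysis.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Hypothetical flags for an AI system falling under Annex III."""
    profiles_natural_persons: bool
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_influencing: bool
    preparatory_task_only: bool

def is_high_risk(system: AnnexIIISystem) -> bool:
    """Apply the derogation logic described in the text: profiling always
    keeps the system high-risk; otherwise any one derogation condition
    removes the high-risk label."""
    if system.profiles_natural_persons:
        return True  # profiling forecloses the derogation
    derogation_applies = (
        system.narrow_procedural_task
        or system.improves_completed_human_activity
        or system.detects_patterns_without_influencing
        or system.preparatory_task_only
    )
    return not derogation_applies

# Example: a CV-screening tool that profiles candidates stays high-risk.
assert is_high_risk(AnnexIIISystem(True, False, False, False, False))
```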
Compliance Obligations and Value Chain Responsibility.
For providers of these high-risk systems, the compliance regime is rigorous and built around specific legal obligations. Providers must establish and maintain a risk management system (RMS) as a continuous, iterative process running through the AI system’s lifecycle, identifying and managing known and reasonably foreseeable risks to health, safety, and fundamental rights11. They must implement strict data governance practices so that training, validation, and testing data meet defined quality criteria, including sufficient relevance and representativeness, with measures to prevent and address bias. Providers must draw up and keep updated comprehensive technical documentation before the system is placed on the market or put into service12. To guarantee traceability and post-market monitoring, high-risk systems must also technically permit the automatic recording of events throughout their lifetime. Furthermore, high-risk systems must be designed for effective human oversight using appropriate human-machine interface tools, ensuring human operators possess the necessary competence, training, and authority to properly understand, monitor, and, if necessary, override or interrupt the system13. Finally, high-risk systems must achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. To demonstrate compliance, providers must undergo a conformity assessment, either internal control or third-party assessment by a notified body depending on the type of system, draw up an EU declaration of conformity, and affix the CE marking.
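The logging obligation lends itself to a concrete illustration. The sketch below shows one plausible shape for an automatically recorded event; the field names, identifiers, and the append-only JSON-lines store are my assumptions, not a format the Act prescribes. The point is simply that each event is time-stamped, attributable, and retained for later audit.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIEventRecord:
    """Minimal, hypothetical log entry supporting lifecycle traceability."""
    timestamp: str                        # when the event occurred (UTC, ISO 8601)
    system_id: str                        # identifier of the high-risk AI system
    event_type: str                       # e.g. "inference", "override", "anomaly"
    input_reference: str                  # pointer to the input, not the data itself
    output_summary: str                   # recorded result or decision
    human_reviewer: Optional[str] = None  # operator involved, if any

def append_event(path: str, record: AIEventRecord) -> None:
    """Append one event to an append-only JSON-lines log file."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")

append_event("ai_events.jsonl", AIEventRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system_id="cv-screening-v2",
    event_type="override",
    input_reference="application/48213",
    output_summary="model rejection overridden by recruiter",
    human_reviewer="hr-operator-07",
))
```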
Responsibility extends across the entire value chain. Deployers, the users of high-risk AI systems, must take appropriate technical and organizational measures to ensure that use aligns with the provider’s instructions, that human oversight is assigned to competent persons, and that the system’s operation is continuously monitored14. Deployers who are public bodies, or private entities providing essential public services such as healthcare, social security benefits, or creditworthiness evaluation, must conduct a fundamental rights impact assessment (FRIA) prior to deployment. Every operator, whether provider or deployer, must ensure that its staff possess an adequate level of AI literacy. Moreover, distributors, importers, or deployers will be deemed providers and assume all associated obligations if they put their name on a system already on the market, make a substantial modification, or change its intended purpose such that it becomes high-risk. Non-EU providers must appoint an authorized representative established in the Union by written mandate.
Beyond Borders: The Global Reach of EU AI Law.
The ambition of the AI Act reaches beyond the EU’s borders, a phenomenon often called the “Brussels Effect”. The regulation applies to providers placing systems on the Union market or putting them into service there, irrespective of where they are established. It also applies to providers and deployers established in a third country where the output produced by the AI system is intended to be used in the Union15. The consequences of non-compliance are serious: violating the prohibited AI practices attracts fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher. Non-compliance with high-risk obligations can result in fines of up to €15 million or 3% of worldwide annual turnover, and supplying incorrect information is punishable by fines of up to €7.5 million or 1% of worldwide annual turnover16.
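Because the ceilings scale with turnover, the percentage cap dominates for large undertakings. A minimal arithmetic sketch, applying the “whichever is higher” rule stated above to a purely hypothetical turnover figure:

```python
def fine_ceiling(fixed_cap_eur: float, turnover_pct: float, turnover_eur: float) -> float:
    """Maximum administrative fine: the higher of the fixed cap and the
    turnover-based cap (the 'whichever is higher' rule)."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Hypothetical undertaking with EUR 2bn worldwide annual turnover.
# Prohibited-practice tier: EUR 35m or 7% -> the 7% cap (EUR 140m) governs.
print(fine_ceiling(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
# High-risk-obligation tier: EUR 15m or 3% -> EUR 60m.
print(fine_ceiling(15_000_000, 0.03, 2_000_000_000))  # 60000000.0
```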
Redress and Liability under the Product Liability Directive.
Whereas the AI Act governs ex ante compliance, the recast Product Liability Directive (PLD) governs ex post liability and redress for victims17. The new PLD substantially broadens the definition of a product to expressly cover software, AI systems, and digital components, so that strict liability applies to them: the claimant need only demonstrate defectiveness, damage, and the causal connection between the two. In complex cases, the reformed regime addresses obstacles such as information asymmetry and evidentiary opacity by providing rebuttable presumptions of defectiveness or causality where the defendant has failed to disclose relevant evidence or where the product breaches mandatory safety requirements18. This is intended to guarantee that victims of harm caused by AI receive the same degree of protection as victims of harm caused by any other product.
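Read schematically, the claimant’s burden and the presumption triggers compose as follows. This is a deliberately crude sketch with hypothetical predicate names: real litigation turns on judicial evaluation, not boolean flags, and the sketch models only the defectiveness presumption, not the causation presumption the recast also provides.

```python
from dataclasses import dataclass

@dataclass
class PLDClaim:
    """Hypothetical elements of a strict-liability claim under the recast PLD."""
    damage_shown: bool
    defect_shown: bool
    causation_shown: bool
    # Presumption triggers described in the text (heavily simplified):
    defendant_withheld_evidence: bool = False
    mandatory_safety_breach: bool = False

def claim_succeeds_prima_facie(c: PLDClaim) -> bool:
    """Defectiveness may be rebuttably presumed where a trigger applies;
    damage and causation must still be established."""
    defect = (c.defect_shown
              or c.defendant_withheld_evidence
              or c.mandatory_safety_breach)
    return c.damage_shown and defect and c.causation_shown
```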
Nonetheless, commentators observe that uncertainty over the status of the companion AI Liability Directive (AILD) may leave gaps in redress, especially for claims based on fault.
The Road to Implementation.
The phased implementation schedule demands prompt action from legal, risk, and compliance professionals to turn legal requirements into governance realities. Seven main priorities drive this change. First, all current and pipeline AI systems should be mapped as assets so that their risk classification can be settled conclusively (a minimal register is sketched after this paragraph). Second, establishing a quality management system and a clear governance committee covering the roles of provider, deployer, importer, distributor, and authorised representative embeds governance at an early stage19. Third, supply-chain due diligence means reviewing vendor contracts for third-party AI systems to ensure compliance with EU standards and to secure audit rights and liability allocation. Fourth, meticulous documentation and technical traceability require the creation and retention of technical documentation and automatic logs for at least six months20. Fifth, operationalizing human oversight means designing effective human-AI interfaces and processes that guarantee the human operator has the competence and technical capability to override outputs, and performing fundamental rights impact assessments where required. Sixth, strong post-market surveillance and incident reporting require documented methods of gathering, examining, and reporting serious incidents or malfunctions to the national competent authorities and the AI Office. Seventh, extraterritoriality means that the EU standard often serves as the de facto global standard.
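As flagged in the first priority, an AI asset register can start very simply. The sketch below shows one plausible minimal shape; the fields, role labels, and example entries are my assumptions, mapped onto the four-tier hierarchy discussed earlier in the article.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four-tier hierarchy discussed earlier."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    TRANSPARENCY = "transparency"
    MINIMAL = "minimal"

@dataclass
class AIAsset:
    """One entry in a hypothetical AI asset register."""
    name: str
    operator_role: str           # provider, deployer, importer, distributor
    intended_purpose: str
    risk_tier: RiskTier
    fria_required: bool = False  # fundamental rights impact assessment

registry = [
    AIAsset("cv-screening-v2", "deployer", "candidate shortlisting", RiskTier.HIGH),
    AIAsset("support-chatbot", "provider", "customer support", RiskTier.TRANSPARENCY),
]

# Surface the systems carrying high-risk obligations first.
high_risk = [a for a in registry if a.risk_tier is RiskTier.HIGH]
print([a.name for a in high_risk])  # ['cv-screening-v2']
```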
Conclusion: Toward a Culture of Ethical Resilience.
In conclusion, the AI Act and the recast PLD together send a clear signal that digital innovation must be human-centred and transparent. This is a defining moment for legal, risk, and compliance professionals: an opportunity to build systems and structures not merely to tick boxes, but to create cultures of ethical resilience. The long-term dividend is trust, earned by translating these laws into proactive governance models that reduce risk and protect rights. In a fast-changing digital economy, trust may be the most precious currency of all.
Bibliography
Primary Sources
European Union Legislation
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts OJ L, 2024/1689.
- Charter of Fundamental Rights of the European Union OJ C326/391.
- Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive – Recast) OJ L, 2024/2853.
- Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) COM(2022) 496 final.
Secondary Sources
Books, Reports, and Official Publications
- Bertolini A, AI and Liability: A European Perspective (European Parliament, 2020).
- European Commission, Ethics Guidelines for Trustworthy AI (High-Level Expert Group on Artificial Intelligence, 2019).
- European Commission, White Paper on Artificial Intelligence: A European Approach to Excellence and Trust COM(2020) 65 final.
- European Data Protection Supervisor (EDPS), Opinion 14/2021 on the Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (AI Act).
- European Parliament Research Service, Product Liability Directive (Recast): Legislative Briefing (2024).
- European Parliament Research Service, The Artificial Intelligence Act: Key Issues and Changes (2024).
Journal Articles, Academic Papers, and Online Sources
- Botero Arcila B, ‘AI Liability in Europe: How Does It Complement Risk Regulation and Deal with the Problem of Human Oversight?’ (2024) 54 Computer Law & Security Review 106012.
- Floridi L, ‘Establishing the Rules for Building Trustworthy AI’ (2019) 568 Science 693.
- Hacker P, ‘The European AI Liability Directives — Critique of a Half-Hearted Approach and Lessons for the Future’ (2022) arXiv preprint.
- CEPS, ‘An AI Liability Regulation would complete the EU’s AI strategy’ https://www.ceps.eu/an-ai-liability-regulation-would-complete-the-eus-ai-strategy/ accessed 13 November 2025.
- Deloitte, ‘EU AI Act: introducing a framework for deployment and usage of AI within the EU’ https://www.deloitte.com/nl/en/services/consulting-risk/analysis/eu-ai-act.html accessed 13 November 2025.
- Enterprise Ireland, ‘EU Artificial Intelligence (AI) Act’ https://enterprise.gov.ie/en/what-we-do/innovation-research-development/artificial-intelligence/eu-ai-act/ accessed 13 November 2025.
- European Commission, ‘AI Act: Shaping Europe’s digital future — Regulatory framework on AI’ https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai accessed 13 November 2025.
- European Parliamentary Research Service, ‘Artificial intelligence liability directive’ [URL not provided in source] accessed [Date Unknown].
- IBM Think, ‘What is the EU AI Act?’ https://www.ibm.com/think/topics/eu-ai-act accessed 13 November 2025.
- Norton Rose Fulbright, ‘Artificial intelligence and liability: key takeaways from the EU’s reforms’ [URL not provided in source] accessed 13 November 2025.
- Pinsent Masons, ‘Revised EU product liability regime expands to AI software providers’ https://www.pinsentmasons.com/out-law/analysis/revised-eu-product-liability-regime-expands-ai-software-providers accessed 13 November 2025.
- YJOLT, ‘Limitations and Loopholes in the EU AI Act and AI Liability Directives: What it means for the European Union and United States’ https://yjolt.org/limitations-and-loopholes-eu-ai-act-and-ai-liability-directives-what-means-european-union-united accessed 13 November 2025.
1 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence (Artificial Intelligence Act) OJ L, 2024/1689, recital 1.
2 Botero Arcila B, ‘AI Liability in Europe: How Does It Complement Risk Regulation and Deal with the Problem of Human Oversight?’ (2024) 54 Computer Law & Security Review 106012.
3 Botero Arcila B, ‘AI Liability in Europe: How Does It Complement Risk Regulation and Deal with the Problem of Human Oversight?’ (2024) 54 Computer Law & Security Review 106012.
4 Deloitte, European Union Artificial Intelligence Act: Deep Dive (Deloitte Global, March 2025) https://www.deloitte.com/nl/en/services/consulting-risk/analysis/eu-ai-act.html accessed 13 November 2025.
5 Kosinski M and Scapicchio M, ‘What is the EU AI Act?’ (IBM Think, 2024)
https://www.ibm.com/think/topics/eu-ai-act accessed 13 November 2025.
6 European Parliament, ‘EU AI Act: first regulation on artificial intelligence’ (European Parliament, 8 June 2023) https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence accessed 13 November 2025.
7 European Parliament, ‘EU AI Act: first regulation on artificial intelligence’ (European Parliament, 1 February 2025) https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence accessed 13 November 2025.
8 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence and amending Regulations (EC) No 300/2008
9 Deloitte, European Union Artificial Intelligence Act: Deep Dive (Deloitte Global, March 2025) https://www.deloitte.com/nl/en/services/consulting-risk/analysis/eu-ai-act.html accessed 13 November 2025.
10 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence and amending Regulations (EC) No 300/2008
11 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence and amending Regulations (EC) No 300/2008
12 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence and amending Regulations (EC) No 300/2008
13 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence and amending Regulations (EC) No 300/2008
14 Deloitte, European Union Artificial Intelligence Act: Deep Dive (Deloitte Global, March 2025) https://www.deloitte.com/nl/en/services/consulting-risk/analysis/eu-ai-act.html accessed 13 November 2025.
15 Deloitte, European Union Artificial Intelligence Act: Deep Dive (Deloitte Global, March 2025) https://www.deloitte.com/nl/en/services/consulting-risk/analysis/eu-ai-act.html accessed 13 November 2025.
16 Kosinski M and Scapicchio M, ‘What Is the EU AI Act?’ (IBM Think, 2024)
https://www.ibm.com/think/topics/eu-ai-act accessed 13 November 2025.
17 Botero Arcila B, ‘AI Liability in Europe: How Does It Complement Risk Regulation and Deal with the Problem of Human Oversight?’ (2024) 54 Computer Law & Security Review 106012.
18 Pollard B, Prochaska K and Sethi R, EU’s Revised Product Liability Directive: The Impact on the Legal, Business, and Operational Landscape (Willkie Farr & Gallagher LLP, 3 February 2025) https://www.willkie.com/-/media/files/publications/2025/02/eus-revised-product-liability-directive.pdf accessed 13 November 2025.
19 Deloitte, European Union Artificial Intelligence Act: Deep Dive (Deloitte Global, March 2025) https://www.deloitte.com/nl/en/services/consulting-risk/analysis/eu-ai-act.html accessed 13 November 2025.
20 Botero Arcila B, ‘AI liability in Europe: How Does It Complement Risk Regulation and Deal with the Problem of Human Oversight?’ (2024) 54 Computer Law & Security Review 106012.