Authored By: Daniel Adewale Tella
University of Law
Abstract
This article critically examines the judiciary’s growing use of artificial intelligence (AI) in the interpretation of law and the justification of decisions in criminal adjudication. It evaluates whether existing legal frameworks adequately address the ethical, procedural, and constitutional challenges arising from AI’s integration into judicial processes—particularly with respect to fairness, transparency, and reliability. Using a doctrinal and comparative methodology, the paper analyses legislation, case law, and academic commentary across the United Kingdom, European Union, and United States.
Findings reveal a persistent gap between the judiciary’s increasing reliance on AI-assisted reasoning and the effective protection of due process rights. The study argues that algorithmic bias and systemic inequalities risk exacerbating pre-existing disparities within criminal justice systems, disproportionately affecting young people and ethnic minorities. It concludes that without statutory reform to enhance transparency, accountability, and human oversight, the use of AI in criminal adjudication threatens to undermine public confidence in justice and the rule of law.
1. Introduction
Artificial intelligence has become an indispensable component of modern governance, transforming fields from healthcare to finance. Yet, when applied to the administration of criminal justice, it raises unprecedented ethical and constitutional questions. AI systems have been deployed in areas such as predictive policing, facial recognition, forensic analysis, and even judicial sentencing.[1] These applications promise efficiency and consistency, but they also challenge the fundamental values that underpin the rule of law—human reasoning, moral judgment, and independence of the judiciary.[2]
A lay observer may perceive AI as a neutral and objective tool; however, its use in criminal adjudication reveals deeper tensions between technological innovation and legal accountability. The central question that motivates this study is whether existing legal frameworks are sufficiently robust to regulate the judiciary’s use of AI and to safeguard defendants’ rights to fairness and due process.
This paper argues that they are not. It contends that while AI may assist in decision-making, its opacity and potential for bias risk compromising judicial impartiality and equality before the law.[3] Furthermore, the reliance on algorithmic systems threatens to blur the traditional boundaries of actus reus and mens rea, raising critical questions about culpability, intent, and the very meaning of justice in the digital age.[4] This concern invites reflection on the importance of judicial independence, which the Constitutional Reform Act 2005 entrenched by strengthening the separation of powers (for instance, through the creation of the Supreme Court).[5] That statute does not merely set a standard against interference by government ministers; the same standard should apply to the use of AI within the UK constitution.
Lord Bingham’s eight principles of the rule of law reinforce the argument of this paper, in particular the basic requirements that the law ‘be applied equally to all’ and that ‘judicial procedures must be fair’.[6] Allowing AI and autonomous systems to compromise or delay the application of these principles risks creating long-term vulnerabilities, potentially opening the floodgates to systemic erosion over time.
2. Development of AI in Criminal Law
2.1 The Emergence of AI in Judicial and Policing Contexts
The mid-2010s marked a significant turning point in the United States, as algorithmic risk-assessment tools such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) came into widespread use in sentencing.[7] These systems aimed to enhance the consistency of sentencing and to predict the likelihood of reoffending. However, investigations later revealed substantial racial and socio-economic bias, as well as the problematic secrecy surrounding proprietary algorithms.[8]
ProPublica’s 2016 analysis demonstrated that COMPAS falsely labelled Black defendants as high-risk at nearly twice the rate of white defendants.[9] The algorithm’s parameters were undisclosed, and defendants were unable to contest the reliability of the evidence used to influence sentencing: “COMPAS is a proprietary algorithm, its inner workings are not publicly disclosed, raising concerns about accountability and defendants’ rights to challenge automated decisions.”[10] This “black box” dilemma illustrates the collision between technological innovation and the principle of open justice, a tension squarely raised in State v Loomis (2016).[11] A similar concern arose in the Court of Appeal in Bridges, where the court held that South Wales Police were in breach of their public sector equality duty by failing to “make enquiries about whether the technology had bias on racial … grounds.”[12]
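In quantitative terms, ProPublica’s headline finding concerned the false positive rate, that is, the proportion of defendants who did not in fact reoffend but were nonetheless scored as high-risk. A minimal formulation, using the approximate figures ProPublica reported, is:

\[ \mathrm{FPR} = \frac{FP}{FP + TN}, \qquad \mathrm{FPR}_{\text{Black}} \approx 45\%, \quad \mathrm{FPR}_{\text{white}} \approx 23\% \]

where FP denotes non-reoffending defendants labelled high-risk and TN denotes non-reoffending defendants labelled low-risk.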
Transparency and privacy therefore become complex: the governing law is limited, and companies maintain confidentiality over AI decision-making. For instance, Articles 13–15 GDPR provide for disclosure of “meaningful information” about automated decision-making, but this requirement has been interpreted narrowly enough to leave questions of bias unanswered, so the systems remain a black box.[13] The Data Protection Act 2018, which incorporates the GDPR into UK law, preserves exemptions for intellectual property, meaning organisations are not required to reveal their algorithms.[14]
Similarly, the use of facial recognition technology has produced a series of wrongful arrests, most notably that underlying Williams v City of Detroit Police Department (2020) and the wrongful detention of Quran Reid in 2022 following a false match generated for Louisiana authorities.[15] In both cases, faulty algorithmic matches led to detentions later found to be baseless. Such examples underscore how algorithmic fallibility can translate into human injustice, disproportionately impacting individuals from minority communities.[16] In contexts where the deprivation of liberty is at stake, even a single erroneous match is one too many, reinforcing the need for stringent safeguards before such technologies are relied upon, so as to avoid miscarriages of justice.
2.2 The Doctrinal Problem: Actus Reus and Mens Rea in an Algorithmic Age
At the doctrinal level, AI challenges the two foundational pillars of criminal liability: actus reus (the guilty act) and mens rea (the guilty mind).[17] While AI systems can produce physical outcomes causing harm through action or omission (actus reus), they lack consciousness, intention, or moral awareness.[18] The absence of mens rea undermines traditional notions of culpability as developed in R v Nedrick [1986] and R v Woollin [1999].[19]
Legal scholars such as Hallevy and Sartor have proposed models of “indirect liability,” suggesting that responsibility might shift from the AI itself to the humans who develop, program, or deploy it.[20] Yet these models remain conceptually strained, and the question of whether autonomous systems can ever be treated as moral or legal agents bearing full criminal responsibility continues to divide opinion.
Without a clear allocation of responsibility, AI-assisted decisions risk creating accountability gaps where neither human nor machine can be properly held liable. This threatens both the deterrent function and the moral legitimacy of criminal law.[21]
3. Legal Frameworks and Regulatory Approaches
3.1 The European Union: Towards Structured Regulation
The European Union has taken the most advanced legislative steps through its Artificial Intelligence Act (AIA), proposed in 2021 and now enacted as Regulation (EU) 2024/1689.[22] The AIA adopts a risk-based approach, classifying AI systems according to their potential impact on fundamental rights. Judicial and law enforcement applications are deemed “high-risk,” requiring conformity assessments, transparency, and human oversight.[23] Article 50 AIA imposes transparency obligations on the providers of “certain AI systems.” The caveat to Article 50 is that it not only covers generative and general-purpose AI, but Article 50(5) also creates exemptions for law enforcement use, raising concerns about the risks posed by this deliberate special treatment of systems deployed to “detect, prevent, investigate or prosecute criminal offences.”[24] That latitude leaves room for further injustices of the kind seen in Williams (2020) and the Reid case (2022), and for continuing doubts about due process protections and reliance on algorithmic evidence.[25]
However, as the Italian scholar Donini has also observed, whilst the AIA mandates transparency, it does not impose a binding duty to prevent harm or to adopt corrective measures once risks are identified.[26] Nor does it fully address the proprietary nature of most AI systems, leaving transparency largely at the discretion of private developers. The European Parliament’s Resolution of 6 October 2021 on artificial intelligence in criminal law reaffirmed that AI must assist, not replace, human judgment; yet the resolution is political rather than legal and therefore lacks binding effect.[27] It echoed the requirement that AI applications in policing and the judiciary respect fundamental rights under the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights.[28] Nevertheless, the legislation’s incompleteness leaves significant gaps, particularly in the attribution of liability where algorithmic decisions contribute to miscarriages of justice.
3.2 The United Kingdom: Principle-Based Governance
The United Kingdom has adopted a principle-based approach, guided by five core values outlined in government policy: safety, transparency, fairness, accountability, and contestability.[29] These principles complement existing laws such as the Equality Act 2010 and the Human Rights Act 1998.[30]
This model offers flexibility and promotes innovation but lacks enforceability.[31] It provides no statutory mechanism for redress when AI systems malfunction or perpetuate discrimination. Oversight is sector-specific rather than universal, meaning that criminal justice—despite its high stakes—receives no special protection.
The UK’s reliance on pre-existing principles, rather than creating a dedicated AI statute, risks leaving courts and defendants uncertain about the legal standards governing algorithmic evidence.
3.3 The United States: Fragmented and Sectoral Regulation
The United States, in contrast, regulates AI through a combination of federal and state-level initiatives. The Federal Trade Commission has issued guidance on algorithmic accountability, and local jurisdictions have passed ordinances restricting facial recognition technologies.[32] Yet there is no overarching federal framework governing AI in criminal justice.
This fragmented approach leads to inconsistent protections and creates jurisdictional disparities. The lack of transparency in proprietary systems like COMPAS, coupled with uneven local oversight, continues to expose defendants to risks of error and bias.[33]
Comparatively, the EU’s AIA, although incomplete, provides a more coherent and enforceable model for balancing innovation with fundamental rights.
4. Critical Analysis and Alternative Perspectives
AI’s integration into criminal justice reveals a structural paradox: while it promises objectivity, it reproduces and amplifies human bias. Current frameworks, whether principle-based (UK), sectoral (US), or risk-based (EU), have yet to establish a unified standard for liability or fairness.[34]
4.1 Separate Legal Personality and AI
The question of accountability remains central. If AI cannot form intent or foresee harm, the moral basis for criminal liability is eroded. One possible solution is the recognition of limited “AI legal personality,” analogous to the corporate legal personality established in Salomon [1897].[35] Lord Halsbury LC stated: “Either the limited company was a legal entity or it was not. If it was, the business belonged to it and not to Mr. Salomon… If it was not, there was no person and nothing to be an agent at all; and it is impossible to say at the same time that there is a company and there is not.”[36] Just as a company can act independently of its shareholders, AI could theoretically bear its own responsibilities within defined legal boundaries.
However, company law also provides a corrective mechanism—the doctrine of lifting the corporate veil—to prevent injustice or fraud.[37] By analogy, a similar doctrine could be introduced in criminal law to “pierce the AI veil,” holding developers or operators liable when their systems cause harm due to negligence, recklessness, or discriminatory design.
This analogy illustrates how the common law could evolve to close accountability gaps without waiting for exhaustive statutory reform.[38] It also underscores the importance of aligning legal innovation with the principles of justice and equality that underlie the criminal law.
4.2 Constructive Manslaughter and AI
One may consider the potential application of constructive (unlawful act) manslaughter, which imposes liability for an unlawful and dangerous act even in the absence of an intent to kill, to AI-related harm.[39] The test in R v Church [1966] asks what a sober and reasonable person would recognise as carrying ‘some risk of harm,’ and it could, in principle, extend to AI systems that act in foreseeably dangerous ways.
In R v Larkin [1943], the court confirmed that the act need not be directed at the victim, so long as it is inherently dangerous and results in death.[40] Applied to AI, this suggests that developers or operators could be liable if their AI engages in inherently risky behaviour, even if harm was not specifically targeted at an individual.
Further, the principle in R v Goodfellow (1986) extended unlawful act manslaughter to non-violent and indirect acts.[41] This supports potential liability for technologically mediated harm where an AI indirectly causes death or injury, for example through errors in automated healthcare systems or autonomous vehicles.
Given the multiplicity of actors involved, including developers, institutions, and regulatory bodies, determining whose actions constitute the ‘unlawful act’ may require a nuanced approach that considers foreseeability, control over the AI, and adherence to professional or statutory duties.
4.3 The Mischief Rule
Following on from the complexity of constructive manslaughter, interpretive tools such as the mischief rule, which traditionally allow courts to apply statutes in line with legislative intent, are similarly strained.
The power of judicial interpretation through the mischief rule is traditionally intended to identify and suppress the mischief the legislature sought to prevent.[42] However, AI presents a fundamental challenge to this approach because AI systems operate in ways that were not conceived when existing laws and constitutions were drafted. The application of the mischief rule may no longer reflect the intent of the legislature but instead reflect the outputs or decisions of AI itself.[43] In other words, in some cases, the “voice of the law” may be mediated—or even supplanted—by AI, raising questions about accountability and judicial authority.
As observed in the Indian Journal of Law and Legal Research (2021) 3(4) IJLLR 112, “the mischief rule may not be a good tool when new circumstances emerge, with cases unforeseen by the legislator and beyond the reach of the statute … a new statute may be needed.”[44] This observation underscores that interpretative principles alone are insufficient to address the novel challenges introduced by AI, highlighting the need for statutory reform. Laws drafted in a pre-AI era cannot fully anticipate or regulate AI-driven harms, making legislative innovation essential to ensure clarity, responsibility, and fairness in the deployment of such technologies.
5. Reform and Recommendations
Reform in this area must prioritise human rights, fairness, and transparency over technological efficiency. Scholars such as Ashworth, Donini, and Carvalho have long argued that criminal law must rest on principled grounds rather than on risk management alone.[45] AI regulation should reflect that normative foundation.[46]
First, legislation should prohibit the use of AI-generated matches or predictions as determinative evidence of guilt without independent human verification. AI outputs must assist investigations, not replace judicial reasoning.[47] Such safeguards would preserve the right to a fair trial under Article 6 of the ECHR.[48]
Second, transparency obligations should compel disclosure of algorithms, datasets, and testing methodologies relevant to criminal proceedings.[49] Defendants must have the ability to scrutinise and challenge algorithmic evidence—a requirement of both due process and equality of arms.[50]
Third, a dedicated statutory framework should define the boundaries of AI liability.[51] This framework could adopt a tiered model distinguishing between creators, deployers, and end-users. Where necessary, courts should be empowered to “pierce the AI veil,” ensuring that corporate entities deploying high-risk systems cannot evade accountability through technological complexity.[52]
Finally, ethical reform should move beyond negative prevention to a restorative approach that prioritises public trust and welfare. As Vassalli and Donini suggest, the legitimacy of AI in justice depends on embedding human values—dignity, transparency, and proportionality—into its governance.[53]
6. Conclusion
The integration of artificial intelligence into criminal adjudication represents a defining challenge for modern legal systems. While technology offers potential efficiencies, its unchecked use threatens to erode foundational principles of fairness, accountability, and human oversight.[54] Across jurisdictions, the absence of clear liability mechanisms has created a legal vacuum that undermines due process and risks miscarriages of justice.[55]
To preserve public confidence in the rule of law, reform must proceed along principled and transparent lines. Courts and legislators must ensure that AI serves justice rather than supplants it. A coherent transnational approach—combining the EU’s binding regulation, the UK’s flexible principles, and the US’s practical sectoral insights—could offer a balanced framework. Only then can the law fulfil its constitutional promise of equality before the law in an algorithmic age.
Bibliography
Cases
Gilford Motor Co Ltd v Horne [1933] Ch 935 (CA).
Prest v Petrodel Resources Ltd [2013] UKSC 34, [2013] 2 AC 415.
R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058.
R v Church [1966] 1 QB 59.
R v Goodfellow (1986) 83 Cr App R 23.
R v Larkin [1943] KB 174.
R v Nedrick [1986] 1 WLR 1025.
R v Woollin [1999] 1 AC 82 (HL).
Royal College of Nursing v DHSS [1981] AC 800 (HL).
Salomon v A Salomon & Co Ltd [1897] AC 22 (HL).
Edwards and Lewis v United Kingdom (2004) 40 EHRR 24.
Rowe and Davis v United Kingdom (2000) 30 EHRR 1.
State v Loomis 881 NW 2d 749 (Wis 2016).
Williams v City of Detroit Police Department (US District Court, Eastern District of Michigan, 2020).
Legislation, Treaties and Official Documents
Constitutional Reform Act 2005.
Data Protection Act 2018, s 120.
Equality Act 2010.
Human Rights Act 1998.
Regulation (EU) 2016/679 (General Data Protection Regulation) arts 15(1)(h), 23(1)(b).
Council of Europe, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems (CEPEJ 2018).
European Convention on Human Rights (1950).
European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM(2021) 206 final.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act).
European Parliament, Artificial Intelligence in Criminal Law and its Use by the Police and Judicial Authorities in Criminal Matters 2020/2016(INI).
Department for Science, Innovation and Technology, A Pro-Innovation Approach to AI Regulation (White Paper, March 2023).
Federal Trade Commission, Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI (19 April 2021).
House of Commons Science and Technology Committee, The Work of the Biometrics Commissioner and the Forensic Science Regulator (HC 1970, 2019).
Royal Society, Forensic Science and Digital Evidence (2021).
Law Commission of England and Wales, Artificial Intelligence and the Law: Discussion Paper (31 July 2025).
House of Lords Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (HL Paper 100, 2017–19).
House of Lords Liaison Committee, Artificial Intelligence Policy in the UK: “No Room for Complacency” (18 December 2020).
Books
Andrew Ashworth and Jeremy Horder, Principles of Criminal Law (9th edn, OUP 2019).
Tom Bingham, The Rule of Law (Penguin 2010).
Vernon Bogdanor, The New British Constitution (Hart Publishing 2009).
Diogo R Galvão Carvalho, Criminal Law and Risk Regulation (Hart Publishing 2020).
João Carlos Carvalho, Risk and Criminal Law: Rethinking Criminalisation in Contemporary Society (OUP 2017).
Massimo Donini and Giovanni Vassalli, ‘Human-Centred Principles for the Governance of AI in Justice’ in Federica Casarosa and Giovanni Sartor (eds), AI and the Rule of Law (OUP 2024).
Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (Northeastern University Press 2013).
Cathy O’Neil, Weapons of Math Destruction (Crown 2016).
Frank Pasquale, The Black Box Society (Harvard University Press 2015).
Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2019).
Journal Articles
Hal Ashton, ‘Definitions of Intent Suitable for Algorithms’ (2022) 12 Law, Innovation and Technology 87.
Yasmin Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31 Harvard Journal of Law & Technology 922.
Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249.
Julia Dressel and Hany Farid, ‘The Accuracy, Fairness, and Limits of Predicting Recidivism’ (2018) 4 Science Advances 1.
Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18.
Aziz Z Huq, ‘Racial Equity in Algorithmic Criminal Justice’ (2019) 68 Duke Law Journal 1043.
Nicola Lacey, ‘Philosophical Foundations of the Common Law of Criminal Responsibility’ (2001) 27 Oxford Journal of Legal Studies 671.
Francesca Lagioia and Giovanni Sartor, ‘AI Systems Under Criminal Law: A Legal Analysis and a Regulatory Perspective’ (2020) 33 Philosophy & Technology 433.
Kate Malleson, ‘The Supreme Court of the United Kingdom: Form and Function’ (2011) 14 Legal Ethics 25.
Beatrice Panattoni, ‘Generative AI and Criminal Law’ (2025) 1 Cambridge Forum on AI: Law and Governance e9.
Stefano Panattoni, ‘Managing Risk in Criminal Law: A Fragile Foundation’ (2024) 12(1) European Journal of Legal Studies 44.
Andrew Roberts, ‘Machine Evidence and the Problem of Legal Responsibility’ (2020) 40 Oxford Journal of Legal Studies 1.
Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Transparent, Explainable, and Accountable AI for the Rule of Law’ (2019) 34 Computer Law & Security Review 1.
Reports and Online Sources
Brennan Center for Justice, Predictive Policing Explained (2019).
Julia Angwin and others, ‘Machine Bias’ ProPublica (23 May 2016).
Drew Harwell, ‘Federal Study Confirms Racial Bias of Many Facial-Recognition Systems’ Washington Post (19 December 2019).
Associated Press, ‘Louisiana Man Wrongfully Arrested After Facial Recognition Error’ The Guardian (15 January 2022).
Atillah, ‘Ethical Challenges of Generative AI’ (2023) <https://www.ethicalai.org>.
Chakraborty et al, Artificial Intelligence in Clinical Decision-Making (2022).
Council of Europe, ‘AI and Criminal Law’ CDPC Project Report.
[1] Brennan Center for Justice, Predictive Policing Explained (2019) https://www.brennancenter.org accessed 14 November 2025; House of Commons Science and Technology Committee, The Work of the Biometrics Commissioner and the Forensic Science Regulator (HC 1970, 2019); Royal Society, Forensic Science and Digital Evidence (2021); for judicial sentencing algorithms: Julia Dressel and Hany Farid, ‘The Accuracy, Fairness, and Limits of Predicting Recidivism’ (2018) 4 Science Advances 1.
[2] Council of Europe, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems (CEPEJ, 2018); Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Transparent, Explainable and Accountable AI for the Rule of Law’ (2019) 34 Computer Law & Security Review 1.
[3] Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’ (2021) 41 Computer Law & Security Review 1; Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18.
[4] Andrew Ashworth and Jeremy Horder, Principles of Criminal Law (9th edn, OUP 2019) chs 3–4; Andrew Roberts, ‘Machine Evidence and the Problem of Legal Responsibility’ (2020) 40 Oxford Journal of Legal Studies 1.
[5] Constitutional Reform Act 2005, ss 1 and 23; Vernon Bogdanor, The New British Constitution (Hart Publishing 2009) 48–55; Kate Malleson, ‘The Supreme Court of the United Kingdom: Form and Function’ (2011) 14 Legal Ethics 25.
[6] Tom Bingham, The Rule of Law (Penguin 2010).
[7] Correctional Offender Management Profiling for Alternative Sanctions (COMPAS).
[8] Julia Angwin and others, ‘Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks’ ProPublica (23 May 2016) https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing accessed 14 November 2025; Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249, 1270–1278; Aziz Z Huq, ‘Racial Equity in Algorithmic Criminal Justice’ (2019) 68 Duke Law Journal 1043.
[9] Julia Angwin and others, ‘Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks’ ProPublica (23 May 2016) https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing accessed 10 November 2025.
[10] ibid; Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249, 1270–1278.
[11] State v Loomis 881 NW 2d 749 (Wis 2016).
[12] R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 [201] (breach of the public sector equality duty).
[13] Yasmin Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31 Harvard Journal of Law & Technology 922, 927.
[14] Data Protection Act 2018, s 120; Regulation (EU) 2016/679 (General Data Protection Regulation) arts 15(1)(h) and 23(1)(b).
[15] Williams v City of Detroit Police Department (US District Court, Eastern District of Michigan, 2020); Associated Press, ‘Louisiana Man Wrongfully Arrested After Facial Recognition Error’ The Guardian (15 January 2022) https://www.theguardian.com/us-news/2022/jan/15/louisiana-facial-recognition-wrongful-arrest accessed 10 November 2025.
[16] Drew Harwell, ‘Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Matching Error Rate Up to 100 Times Higher for Darker-Skinned Women’ Washington Post (19 December 2019).
[17] Michael J Allen and Ian Edwards, Criminal Law (15th edn, Oxford University Press 2019).
[18] Ibid.
[19] R v Nedrick [1986] 1 WLR 1025; R v Woollin [1999] 1 AC 82 (HL).
[20] Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (Northeastern University Press 2013); Francesca Lagioia and Giovanni Sartor, ‘AI Systems Under Criminal Law: A Legal Analysis and a Regulatory Perspective’ (2020) 33 Philosophy & Technology 433.
[21] Jannik Zeiser, ‘Owning Decisions: AI Decision-Support and the Attributability-Gap’ (2024) 30(4) Science and Engineering Ethics 27.
[22] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) COM(2021) 206 final; now enacted as Regulation (EU) 2024/1689.
[23] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024.
[24] ibid, art 50.
[25] Williams (n 15); Nadia El‑Yaouti, ‘Georgia Man Sues Over Wrongful Arrest Due to Facial Recognition’ Law Commentary (2 October 2023).
[26] Massimo Donini, ‘Massimo Pavarini e la scienza penale. Ovvero, sul valore conoscitivo dell’antimoderno sentimento della compassione applicato allo studio della questione criminale’ (2017) 12(1‑2) Studi sulla Questione Criminale 39.
[27] European Parliament, Resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (2021) P9_TA(2021)0405, para 16.
[28] European Convention on Human Rights (adopted 4 November 1950, entered into force 3 September 1953) art 6.
[29] Department for Science, Innovation and Technology, A Pro-Innovation Approach to AI Regulation (White Paper, March 2023) 9.
[30] Equality Act 2010 (UK); Human Rights Act 1998 (UK).
[31] Department for Science, Innovation and Technology, ‘UK AI Governance Framework’ (GOV.UK, 2023) https://www.gov.uk/government/publications/uk-ai-governance-framework accessed 15 November 2025.
[32] Federal Trade Commission, Using Artificial Intelligence and Algorithms (FTC, 2023) https://www.ftc.gov/business-guidance/using-artificial-intelligence-algorithms accessed 17 November 2025.
[33] Angwin and others (n 9).
[34] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76, 79.
[35] Salomon v A Salomon & Co Ltd [1897] AC 22 (HL) (Lord Halsbury LC).
[36] Ibid.
[37] Prest v Petrodel Resources Ltd [2013] UKSC 34, [2013] 2 AC 415; Gilford Motor Co Ltd v Horne [1933] Ch 935 (CA).
[38] Donal Nolan and Andrew Robertson (eds), Rights and Private Law (Hart Publishing 2012).
[39] R v Church [1966] 1 QB 59.
[40] R v Larkin [1943] KB 174.
[41] R v Goodfellow (1986) 83 Cr App R 23.
[42] P Heydon, The Construction of Statutes (5th edn, Lawbook Co 2010) 149–150, on the purpose of the mischief rule in identifying legislative intent; see also Heydon’s Case (1584) 3 Co Rep 7a, 76 ER 637.
[43] N Katyal and K Reilly, ‘Artificial Intelligence and the Law: Challenges to Judicial Authority’ (2020) 33(2) Harvard Journal of Law & Technology 345.
[44] Indian Journal of Law and Legal Research (2021) 3(4) IJLLR 112.
[45] Andrew Ashworth and Jeremy Horder, Principles of Criminal Law (9th edn, OUP 2019) ch 1; Massimo Donini and Giovanni Vassalli, ‘Human-Centred Principles for the Governance of AI in Justice’ in Federica Casarosa and Giovanni Sartor (eds), AI and the Rule of Law (OUP 2024) 45; Diogo R Galvão Carvalho, Criminal Law and Risk Regulation (Hart Publishing 2020) 12–15.
[46] Beatrice Panattoni, ‘Generative AI and Criminal Law’ (2025) 1 Cambridge Forum on AI: Law and Governance e9.
[47] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Transparent, Explainable, and Accountable AI for the Rule of Law’ (2019) 34 Computer Law & Security Review 1, 5–6; Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Washington University Law Review 1249, 1255.
[48] Ibid (n 27).
[49] Wachter, Mittelstadt and Floridi (n 47); European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence COM(2021) 206 final, 14–16; Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review 18, 25–27.
[50] Rowe and Davis v United Kingdom (2000) 30 EHRR 1, emphasising disclosure obligations necessary to secure equality of arms; see also Edwards and Lewis v United Kingdom (2004) 40 EHRR 24.
[51] Ibid (n 49).
[52] European Parliament, Artificial Intelligence Act: Provisional Agreement (2024); see also Joanna Bryson, ‘The Artificial Intelligence Liability Gap’ (2019) 34 AI & Society 1.
[53] Giovanni Vassalli and Massimo Donini, ‘Human-Centred Principles for the Governance of AI in Justice’ in Federica Casarosa and Giovanni Sartor (eds), AI and the Rule of Law (Oxford University Press 2024).
[54] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Transparent, Explainable, and Accountable AI for the Rule of Law’ (2019) 34 Computer Law & Security Review 1, 12–15.
[55] Ugo Pagallo, ‘Robots in the Cloud with Law’ (2013) 1(1) Philosophy & Technology 25, 30–33; Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2019) ch 6; European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence COM(2021) 206 final, 12–15.