Authored By: Katelynn De Souza
Queen Mary University of London
Abstract:
In the United Kingdom, artificial intelligence is being used with increasing frequency to inform or determine decisions that have legal and social ramifications, raising concerns regarding accountability, transparency and the protection of individual rights. Unlike the European Union, the UK has opted not to implement legislation specific to artificial intelligence, instead relying on existing legal frameworks such as data protection law, equality legislation and regulatory supervision, supplemented by non-statutory guidance. This article assesses whether this regulatory choice is adequate in practice when applied to AI-driven decision-making. It argues that although the UK’s principles-based, pro-innovation approach offers flexibility and adaptability, it leaves significant structural gaps in accountability and protection. Opacity, fragmented oversight and limited access to redress undermine the effectiveness of existing safeguards. The article concludes that targeted legal and institutional reform, rather than legislation tailored exclusively to artificial intelligence, is necessary to ensure meaningful oversight while preserving regulatory flexibility.
Introduction
Artificial intelligence is now routinely used in the UK to inform or determine decisions that have legal and social ramifications. Automated systems are increasingly deployed in sectors such as recruitment screening, credit assessment, welfare administration and public sector risk profiling. Individuals’ access to employment, financial services, public benefits and other opportunities may be directly affected by decisions generated or influenced by such systems. Effective legal scrutiny is crucial when such decisions are made at a scale where even minor errors or biases can produce widespread harm.
Within this regulatory landscape, the UK has made a deliberate decision not to introduce legislation specific to artificial intelligence. Unlike the European Union, which has implemented a comprehensive Artificial Intelligence Act, the UK has chosen to regulate AI through existing legal frameworks such as data protection law, equality law and sector-specific regulation. This approach is explicitly expressed in the government’s AI Regulation White Paper, which frames artificial intelligence as a general-purpose technology whose risks are contingent upon context and application rather than inherent in the technology itself.
The government has rationalised this approach by arguing that AI is advancing rapidly and unevenly across sectors. On this view, a single statute addressing artificial intelligence risks becoming obsolete, overly rigid or inadequately adaptive to evolving concerns. Flexibility and regulatory discretion are instead viewed as benefits, enabling existing regulators to apply overarching principles in a contextual manner.
However, this regulatory choice gives rise to a fundamental tension. While a regulator-led, principles-based framework may support innovation and adaptability, it also depends heavily on legal regimes that were not designed with AI-driven decision-making in mind. Consequently, concerns arise over whether individuals can successfully contest automated decisions, whether responsibility for harm is clearly allocated, and whether regulators have the authority and capacity to intervene consistently and effectively.
This article assesses the adequacy of the UK’s existing legal and regulatory frameworks that govern decision making driven by artificial intelligence, focusing on potential deficiencies in accountability and protection.
It contends that although the existing approach prioritises flexibility and innovation, it reveals inherent structural deficiencies in enforcement, transparency and access to remedies. As a result, reliance on existing legislation and regulatory oversight alone may be inadequate to mitigate the risks associated with AI-driven decision-making in practice.
The United Kingdom’s regulatory philosophy
The UK’s approach to artificial intelligence is predicated on a broader commitment to principles-based, innovation-oriented regulation. Instead of developing an independent legislative framework for artificial intelligence, the government has chosen to embed AI oversight within existing regulatory regimes, structured around shared principles applied by regulators within their respective sectors. This approach is outlined in the AI Regulation White Paper, which identifies safety, transparency, fairness, accountability and contestability as the foundation of the UK’s approach.
At a conceptual level, this model has considerable appeal. Regulation based on principles provides flexibility when confronted with rapid technological change and reduces the risk of statutory obsolescence.
The framework aims to support proportionate intervention that reflects risks specific to individual sectors by enabling regulators to interpret and implement high level principles within their respective domains. The government has repeatedly emphasised that excessively prescriptive regulations may deter investment and limit innovation, particularly in emerging areas of AI development.
However, the effectiveness of this regulatory approach is largely contingent upon the capacity, coherence and authority of the regulators responsible for its implementation. Principles-focused frameworks presume that regulators possess the technical proficiency, institutional resources and enforcement mechanisms to convert abstract principles into effective oversight. This premise is increasingly contested in the context of AI-driven decision-making. AI systems are often complex, opaque and deployed across numerous regulatory domains simultaneously. Effective oversight therefore requires not only sector-specific knowledge but also technical competence and sustained coordination between regulators.
The UK’s dependence on sectoral regulators as the primary means of AI governance raises structural concerns about fragmented accountability. There is a risk that no single authority will be clearly responsible for addressing harm where oversight is distributed across regulators with overlapping but distinct remits.
For example, an AI system used in recruitment may simultaneously engage data protection law, equality law and employment law without any single regulator having a holistic understanding of its operation or impacts. In such circumstances, regulatory failures may arise not because safeguards are absent but because responsibility is divided and intervention is uncertain.
The government has sought to address this challenge through coordination mechanisms such as the Digital Regulation Cooperation Forum (DRCF), which brings together regulators including the Information Commissioner’s Office, the Competition and Markets Authority, Ofcom, and the Financial Conduct Authority. The DRCF aims to promote coherence in digital regulation and to facilitate cooperation on emerging technologies, including algorithmic systems. While such coordination is valuable, it does not resolve the underlying limits of a non-statutory, voluntary framework. The DRCF lacks the authority to impose binding standards or to compel aligned enforcement, and its role is explicitly limited to cooperation rather than decision-making.
This exposes a fundamental tension within the UK’s regulatory framework. While flexibility and decentralisation are advocated as advantages, they may also compromise consistent protection and clear accountability when applied to AI-driven decision making. Therefore, the efficacy of this model in practice depends not only on the existence of legislative safeguards but also on their capacity to provide substantial protection within a fragmented regulatory landscape.
Existing legal frameworks governing AI-driven decision making
3.1 Data Protection Law
In the UK, data protection law constitutes the most direct source of legal protection against detrimental AI-driven decision-making. The UK GDPR and the Data Protection Act 2018 impose obligations relating to fairness, lawfulness, transparency and accountability in the processing of personal data. These requirements apply to AI systems that process personal data, which is common in decision-making contexts such as recruitment, credit scoring and the delivery of public services.
Article 22 of the UK GDPR is particularly significant. It grants individuals the right not to be subject to decisions based solely on automated processing where such decisions produce legal or similarly significant effects. The provision includes rights to obtain human intervention, to express a view and to contest the decision. Article 22 thus ostensibly provides a strong safeguard against wholly automated decision-making that significantly affects individuals.
However, in practice the protective scope of Article 22 is limited. The requirement that a decision must be “solely” automated establishes a restrictive threshold that many AI-driven systems are structured to avoid. Organisations often insert minimal human involvement into decision-making processes, even where algorithmic outputs are routinely accepted without meaningful scrutiny.
Organisations may contend that Article 22 is inapplicable by framing such processes as involving human oversight, notwithstanding the decisive influence of automated systems on outcomes. The Data Protection Act 2018 gives these obligations domestic legal effect and enforcement mechanisms, providing the statutory framework for regulatory intervention and individual remedies in cases of unlawful automated processing.
The notion of ‘meaningful human intervention’ further highlights the constraints inherent in the existing framework. Although regulatory guidance underscores the importance of substantive rather than superficial human oversight, the absence of explicit statutory standards allows organisations significant discretion in determining how oversight is implemented. Human review may amount to merely endorsing algorithmic outputs with minimal scrutiny, providing limited protection against inaccuracies or biases. Consequently, data protection legislation frequently proves inadequate in giving individuals a realistic opportunity to contest decisions made by artificial intelligence, thereby undermining both procedural fairness and accountability.
Scholarly commentary has emphasised this discrepancy between formal protections and their practical efficacy. Roberts and others contend that data protection legislation was not designed to confront the systemic risks posed by complex, adaptive systems, and that reliance on individual rights such as Article 22 may be inadequate for addressing structural harm.
Where AI-driven decision-making functions at scale, procedural rights invoked on a case-by-case basis may offer limited protection, particularly in cases where affected individuals are unaware that automated systems are impacting decisions concerning them.
3.2 Equality law
Equality law provides a further, though more indirect, framework for regulating AI-driven decision-making. The Equality Act 2010 prohibits direct and indirect discrimination on the basis of protected characteristics and applies regardless of whether decisions are made by humans or automated systems. In principle, this means that the use of AI does not displace existing equality obligations: where an AI system produces discriminatory results, the existing legal framework remains formally applicable.
Indirect discrimination provisions are particularly relevant in this context. Numerous AI systems function by detecting statistical correlations or proxy variables that may disproportionately disadvantage protected groups. Even in cases where an algorithm does not explicitly use protected characteristics, variables such as postcode, educational background, or employment history may function as proxies, thereby perpetuating existing social inequalities. In theory, equality law is well-suited to addressing such harms because indirect discrimination does not require discriminatory intent.
In practice, however, equality law encounters significant limitations when applied to AI-driven decision-making. Claims of indirect discrimination require claimants to establish that a provision, criterion or practice puts a protected group at a particular disadvantage. Where algorithms are opaque or proprietary, individuals frequently lack access to the data, model architecture or decision logic necessary to substantiate that disadvantage. This evidential burden is especially pronounced when harm is diffuse rather than attributable to a single, clearly identifiable rule.
Scholars have contended that this evidential asymmetry diminishes the efficacy of equality law in algorithmic contexts. Krištofík observes that probabilistic systems challenge traditional discrimination analysis because outcomes emerge from complex interactions between data and model design rather than from discrete decision rules. Where claimants cannot understand how decisions are generated, equality law’s reliance on evidence of group disadvantage may offer limited practical protection.
As a result, although equality law retains symbolic significance, its capacity to ensure accountability in AI-driven decision-making is constrained. The formal applicability of the law does not guarantee effective enforcement where claimants lack access to the information needed to challenge decisions, or where responsibility for discriminatory effects is dispersed among multiple parties.
Regulatory oversight in practice
The UK’s approach to AI governance depends significantly on oversight and guidance from regulators, in addition to formal legislative obligations. The Information Commissioner’s Office (ICO) is pivotal in this domain, providing guidance on AI and data protection while developing tools such as the AI Auditing Framework. These initiatives seek to help organisations identify and mitigate risks associated with AI implementation, especially in relation to fairness, transparency and accountability.
However, the practical significance of this oversight is limited by its non-binding character. ICO guidance does not establish legally binding obligations and depends predominantly on voluntary compliance. Although the ICO has enforcement authority under data protection law, its actions within the AI domain are predominantly reactive: regulatory measures are frequently initiated in response to complaints or prominent failures rather than through proactive, systematic oversight of AI deployment. This reactive approach is inadequate for addressing structural or anticipatory risks, especially where individuals are unaware that AI systems are affecting decisions concerning them.
The constraints of guidance-based oversight are exacerbated by the complexity of AI systems. Effective regulation requires technical proficiency, sustained engagement, and access to organisational procedures and data. Regulators face resource constraints that hinder their ability to examine AI systems comprehensively, particularly where deployment spans numerous sectors simultaneously.
The Digital Regulation Cooperation Forum represents an attempt to address these challenges by improving coordination between regulators with overlapping areas of responsibility. By facilitating cooperation between bodies such as the ICO, CMA, Ofcom, and FCA, the DRCF seeks to promote coherent regulatory responses to digital technologies, including algorithmic decision-making.
However, the institutional design of the DRCF limits its efficacy as a mechanism of accountability. It is a voluntary, non-statutory forum with no power to compel action or resolve regulatory conflicts. Coordination may enhance information exchange and strategic alignment, but it does not replace the need for enforceable oversight. Where regulators differ in their stance or fail to prioritise AI-related risks, the DRCF lacks the authority to intervene.
Collectively, these characteristics indicate that regulatory oversight in practice is fragmented and inconsistent. Although guidance and coordination mechanisms are in place, they do not consistently result in effective protection or clear accountability. This gap between formal regulatory objectives and their practical enforcement is fundamental to evaluating the adequacy of the UK’s approach.
Key gaps and challenges
5.1 Opacity and explainability
A significant challenge posed by AI-driven decision-making is its opacity. Numerous AI systems, especially those based on machine learning, operate as “black boxes”, rendering their internal reasoning processes difficult to understand. This opacity undermines transparency and limits individuals’ ability to understand how decisions affecting them are reached.
Current legal frameworks struggle to address this issue. Data protection legislation mandates that organisations provide information about automated processing, but these requirements are often satisfied through general descriptions that offer limited insight into outcomes in individual cases. In the absence of a substantive explanation, individuals cannot assess the accuracy, fairness or contestability of decisions, thereby undermining both accountability and protection.
5.2 Meaningful human intervention
Human involvement is often depicted as a safeguard against automated harm. However, where the criteria for meaningful involvement are inadequately defined, human oversight may devolve into a mere formality rather than a substantive safeguard. Current frameworks presume that the engagement of a human decision-maker mitigates risk, without sufficiently examining the quality or independence of that involvement.
This produces a structural vulnerability. Where human evaluation is superficial or overly deferential to algorithmic outputs, accountability is diminished rather than enhanced. Procedural safeguards may exist in theory yet fail to function effectively in practice.
5.3 Accountability and causation
AI-driven decision-making frequently entails multiple actors, including developers, data providers, and implementing organisations. This dispersion of responsibility complicates conventional legal notions of causation and liability. In instances of harm, it may be uncertain who should be held responsible, thereby compromising accountability.
Roberts and others argue that this “distributed agency” creates an accountability gap that existing legal frameworks struggle to address. Where responsibility is fragmented, enforcement may falter even where harm is evident.
5.4 Access to Redress
Even in instances where legal rights are established, individuals may encounter difficulties in obtaining effective remedies. As highlighted by the Ada Lovelace Institute, information asymmetries, evidential burdens, and the systemic nature of many AI-related harms significantly limit the practical enforceability of existing protections. These barriers disproportionately affect individuals with fewer resources or less technical expertise, further weakening the protective capacity of the framework.
The way forward: Targeted Reform
The shortcomings identified in this article do not necessarily justify the implementation of comprehensive legislation specifically targeting artificial intelligence. While the European Union has adopted a broad, risk-based regulatory framework through the Artificial Intelligence Act, replicating a similar model in the UK would represent a significant departure from the principles-based approach underpinning the current regulatory landscape. Concerns about rigidity, legislative obsolescence, and innovation deterrence are not unfounded.
However, the analysis demonstrates that flexibility alone is insufficient where AI-driven decision-making impacts individuals’ rights and access to essential services. Targeted reform offers a balanced alternative that strengthens accountability and protection without abandoning regulatory adaptability.
Firstly, clearer statutory standards are required to enhance the efficacy of current protections, especially regarding what constitutes meaningful human involvement in critical decision-making processes. Secondly, transparency obligations should be strengthened where AI significantly influences legally or socially significant outcomes, enabling individuals to understand and challenge decisions more effectively. Thirdly, coordination among regulators should be enhanced by establishing clearer duties to collaborate and share accountability in areas of overlapping regulatory authority.
These reforms directly address the structural weaknesses previously identified. They enhance explainability, mitigate fragmented accountability, and improve access to redress without imposing technology-specific regulation.
Conclusion:
This article has examined whether the United Kingdom’s existing legal and regulatory frameworks are adequate to govern AI-driven decision-making, or whether they leave significant gaps in accountability and protection. The analysis indicates that although reliance on existing laws and regulator-led oversight provides a degree of coverage, it is insufficient to ensure effective protection in practice when applied to complex and opaque AI systems.
Data protection legislation provides essential procedural safeguards, but its narrow triggering conditions and insufficiently defined standards restrict its overall protective effect. Equality law, although formally applicable, struggles to address algorithmic discrimination effectively owing to evidential barriers and structural presumptions ill-suited to probabilistic systems. Regulatory oversight, while growing progressively more sophisticated in guidance and coordination, remains fragmented and largely non-coercive.
These shortcomings do not indicate that the UK’s regulatory philosophy is fundamentally flawed. Flexibility and adaptability remain valuable in a rapidly evolving technological landscape. However, flexibility without enforceable accountability risks leaving individuals insufficiently protected against AI-driven harm. Targeted reform focused on accountability, transparency, and regulatory coordination is therefore necessary to ensure meaningful legal scrutiny while preserving the strengths of the UK’s pro-innovation approach.
BIBLIOGRAPHY
Legislation
Data Protection Act 2018.
Equality Act 2010.
Regulation (EU) 2016/679 (General Data Protection Regulation), as retained in UK law (UK GDPR).
Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
Official Publications and Regulatory Documents
Digital Regulation Cooperation Forum, Terms of Reference https://www.drcf.org.uk/siteassets/drcf/home/drcf-terms-of-reference.pdf?v=379416 accessed 14 January 2026.
Information Commissioner’s Office, Explaining Decisions Made with Artificial Intelligence https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/ accessed 14 January 2026.
UK Government, A Pro-Innovation Approach to AI Regulation (White Paper, 2023) https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper accessed 14 January 2026.
Reports
Ada Lovelace Institute, Examining the Black Box: Tools for Assessing Algorithmic Systems (2020) https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/ accessed 14 January 2026.
Ada Lovelace Institute, Regulating AI in the UK (2023) https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/ accessed 14 January 2026.
Journal Articles
Krištofík A, ‘Bias in AI (Supported) Decision Making: Old Problems, New Technologies’ (2025) International Journal for Court Administration https://iacajournal.org/articles/10.36745/ijca.598 accessed 14 January 2026.
Roberts H and others, ‘Artificial Intelligence Regulation in the United Kingdom: A Path to Good Governance and Global Leadership?’ (2023) 12(2) Internet Policy Review https://policyreview.info/articles/analysis/artificial-intelligence-regulation-united-kingdom-path-good-governance accessed 14 January 2026.
Books and Institutional Publications
Borgesius FZ, Discrimination, Artificial Intelligence and Algorithmic Decision-Making (Council of Europe 2018) https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73 accessed 14 January 2026.