Authored By: Favour A-Matthew
University of the People
Abstract
Artificial intelligence has moved rapidly from a peripheral technological tool to a central driver of corporate decision-making. Boards increasingly rely on algorithmic systems to inform recruitment, credit scoring, compliance monitoring, fraud detection, and strategic planning. However, the speed of adoption has outpaced the development of governance structures capable of managing the legal risks these systems generate. This article examines artificial intelligence not as a technological innovation, but as a source of boardroom risk with direct implications for directors’ duties, regulatory exposure, and corporate accountability. Focusing on the UK legal framework and drawing comparative insights from the European Union and the United States, it evaluates whether existing standards of oversight, care, and disclosure remain adequate in an era of algorithmic opacity. It argues that while corporate law remains formally technology-neutral, the reliance on opaque AI systems exposes structural weaknesses in contemporary governance models and necessitates a recalibration of what constitutes reasonable care and informed oversight. Furthermore, the article highlights reputational, insurance, and commercial considerations, illustrating the multidimensional impact of AI on boardroom accountability.
Introduction
Boards across the United Kingdom are authorising the deployment of artificial intelligence at an unprecedented rate. Algorithmic systems now inform decisions that were once firmly within human control, including employee recruitment, creditworthiness assessments, pricing strategies, fraud detection, and regulatory compliance. In many organisations, these systems operate continuously and at scale, producing outcomes that influence corporate performance long before they attract human scrutiny. While such systems promise efficiency and competitive advantage, they also introduce new forms of legal exposure that traditional governance frameworks are ill-equipped to address.
The challenge is not that artificial intelligence makes decisions, but that it does so in ways that are often opaque, probabilistic, and difficult to interrogate. Many commercially deployed systems rely on machine-learning models whose internal logic cannot be readily explained, even by those responsible for their implementation. When such systems generate discriminatory outcomes, regulatory breaches, or financial harm, liability does not attach to the technology itself; it falls on the company and, ultimately, its directors. The systemic nature of these harms is significant: unlike isolated human error, algorithmic decisions can propagate at scale, affecting thousands of stakeholders simultaneously.
This development exposes tension at the heart of corporate governance. Directors are expected to pursue innovation and efficiency to promote long-term corporate success. At the same time, they remain subject to duties of care, skill, and independent judgment that presuppose a meaningful understanding of the decisions being taken. Artificial intelligence disrupts this assumption. Where decision-making is delegated to systems that cannot be fully explained, the boundary between responsible reliance and negligent abdication becomes increasingly blurred. The iterative and self-learning nature of AI complicates matters further, as models may evolve over time, generating outcomes that diverge from initial expectations without human detection.
UK company law does not distinguish between human and algorithmic decision-makers. The Companies Act 2006 is technologically neutral, offering no explicit guidance on the governance of automated systems. Yet directors operate within an expanding regulatory environment that includes data protection law, financial services regulation, and consumer protection regimes, all of which increasingly intersect with AI deployment. This raises a critical governance question: can existing legal duties accommodate algorithmic decision-making without recalibrating the standard of oversight expected of corporate boards?
This article argues that artificial intelligence should be understood not merely as a technical asset, but as a material boardroom risk. By examining directors’ duties under UK law, regulatory guidance, and emerging comparative approaches in the European Union and the United States, it assesses whether current governance frameworks remain fit for purpose in an era of algorithmic decision-making. It further explores reputational, insurance, and indemnity considerations, emphasising that the risks extend beyond legal exposure to encompass public trust, investor confidence, and commercial viability.
Directors’ Duties and the Legal Architecture of Oversight
The legal foundation of board-level responsibility in the UK is set out in the Companies Act 2006. Section 172 requires directors to act in good faith to promote the success of the company for the benefit of its members as a whole, while having regard to wider stakeholder considerations. Although the duty is framed subjectively, it does not insulate directors from accountability where decisions are taken without adequate understanding of material risks. Where artificial intelligence plays a central role in shaping corporate outcomes, reliance on such systems forms part of the decision-making process for which directors remain responsible.¹
More directly relevant is section 174, which imposes a duty to exercise reasonable care, skill, and diligence. This duty is assessed through a hybrid standard combining objective expectations with the director’s actual knowledge and experience.² Directors are therefore required not only to meet baseline standards of competence, but also to deploy any specialised expertise they possess. In the context of AI, the section 174 hybrid standard implies that directors must possess sufficient understanding to oversee algorithmic decision-making. Where a director already has data or IT expertise, the standard of care is raised above the objective baseline, requiring them to apply that knowledge proactively to ensure that AI systems are reliable, transparent, and aligned with legal and ethical obligations. Directors must ensure that their understanding of risk is not superficial and that they retain the ability to challenge assumptions and outcomes, particularly where models are opaque.
Artificial intelligence also intersects with data protection law, particularly where automated systems produce legal or similarly significant effects on individuals. Article 22 of the UK GDPR restricts decision-making based solely on automated processing and mandates safeguards such as meaningful human intervention and transparency.³ Compliance with these requirements cannot be delegated entirely to technical teams. Decisions about system design, oversight, and deployment engage governance choices that fall squarely within board-level responsibility. Directors are accountable not only for implementation but also for ensuring that systems operate in accordance with legal and ethical norms.
Regulatory guidance reinforces this interpretation. The Information Commissioner’s Office, the Financial Conduct Authority, and the Competition and Markets Authority have increasingly emphasised organisational accountability in their approach to AI governance.⁴ While such guidance does not have the force of statute, it contributes to the evolving benchmark against which directors’ conduct may be assessed in regulatory investigations and enforcement actions. Failure to incorporate guidance into board-level oversight could be interpreted as negligent governance, particularly where regulatory expectations of AI transparency and accountability are clearly articulated.
Oversight, Delegation, and the Standard of Care
Although UK courts have not yet addressed artificial intelligence directly in the context of directors’ duties, established principles of oversight and delegation remain instructive. Judicial authority has consistently rejected attempts to dilute responsibility through delegation where risks were foreseeable and appropriate safeguards were absent.⁵ Directors are entitled to rely on others, but such reliance must be reasonable and informed. This principle reflects the reasoning in Re Barings plc (No 5), where directors were found unfit for failing to supervise activities they had delegated, a failure that contributed to catastrophic losses. Similarly, a modern board that does not scrutinise an opaque AI vendor’s system risks delegating decision-making without meaningful oversight, exposing itself to analogous liability. Boards must not mistake delegation for abdication; proper oversight requires understanding both the capabilities and limitations of any system relied upon.
This principle acquires particular significance where AI systems are treated as neutral or objective decision-makers. The apparent autonomy of algorithmic systems can obscure underlying assumptions, biases, or data limitations. Where such systems are deployed without adequate testing, monitoring, or understanding of their limitations, failures are more accurately characterised as governance failures rather than technical malfunctions. Algorithmic models often encode human or systemic biases, producing outcomes that are legally, financially, or reputationally damaging. Directors are expected to anticipate and mitigate such risks proactively.
Regulatory enforcement trends reflect this approach. Recent investigations have focused less on algorithmic error itself and more on whether organisations conducted appropriate risk assessments, maintained effective oversight mechanisms, and ensured accountability for automated decisions. Liability is increasingly framed around organisational processes rather than technological outcomes. The legal question is not whether an algorithm erred, but whether the company exercised reasonable care in adopting and supervising it.
Artificial intelligence therefore intensifies existing legal principles rather than displacing them. It magnifies the consequences of poor oversight and exposes weaknesses in governance structures that rely on assumptions of transparency and explainability. The increasing use of AI also challenges traditional assumptions about foreseeability in corporate risk management. Algorithmic systems are often deployed precisely because they are capable of identifying patterns beyond human perception. However, this predictive capacity can complicate assessments of foreseeability when harm occurs. Where risks are identified internally through testing, model validation, or prior incidents, a failure to act on such signals may strengthen the argument that subsequent harm was reasonably foreseeable. In this sense, artificial intelligence may raise rather than lower the standard of care expected of directors by expanding the range of risks that can be anticipated and mitigated.
Moreover, the iterative nature of many AI systems means that risk profiles evolve over time. A system that performs lawfully at the point of deployment may later generate problematic outcomes as data inputs change or models retrain. Reasonable oversight therefore cannot be static. Directors who treat AI governance as a one-off approval exercise may struggle to demonstrate ongoing diligence where harms emerge months or years after initial deployment.
Algorithmic Opacity and the Limits of Board Accountability
The most profound challenge posed by artificial intelligence is opacity. Directors are required to exercise independent judgment, yet many AI systems operate as “black boxes” whose internal logic cannot be readily interrogated. This creates a structural tension within corporate governance frameworks, which assume that meaningful oversight is both possible and practicable.
Reliance on third-party vendors further complicates accountability. While outsourcing development and deployment may reduce operational burdens, it does not transfer legal responsibility. Directors remain accountable for the systems their companies rely upon, even where contractual arrangements limit access to proprietary models or data. In some cases, dependence on opaque vendor systems may heighten exposure by constraining the board’s ability to demonstrate reasonable diligence.
There is also a growing argument that AI-related risks should be treated as material for disclosure purposes. Where algorithmic systems play a significant role in revenue generation, compliance, or risk management, their limitations may be relevant to investors’ assessment of long-term value. Failure to acknowledge such risks could undermine investor confidence and expose companies to claims of misleading disclosure. Comparisons between human and algorithmic error are often misleading: human decision-makers err unpredictably but transparently, whereas algorithmic systems err systematically and at scale.
This difficulty is compounded by the growing use of AI systems in strategic rather than purely operational contexts. Algorithmic tools increasingly inform decisions about market entry, pricing optimisation, mergers and acquisitions, and long-term forecasting. Where such systems influence strategic judgment, opacity becomes particularly problematic. Directors are required to exercise independent judgment on matters of long-term consequence, yet reliance on opaque systems risks converting judgment into deference. The distinction between assistance and substitution is therefore critical. While AI may support human decision-making, overreliance on algorithmic outputs risks eroding the deliberative role of the board.
Insurance, Indemnity, and Reputational Exposure
Beyond regulatory enforcement, artificial intelligence introduces significant insurance and indemnity considerations. Insurers are increasingly scrutinising AI-related risks when underwriting directors’ and officers’ liability policies. Some policies now contain exclusions for losses arising from algorithmic decision-making, particularly where governance failures are evident. This has tangible commercial consequences: boards may face higher premiums or reduced coverage if AI oversight is inadequate.
Reputational risk is also acute. Regulatory investigations into algorithmic discrimination, unfair consumer outcomes, or opaque decision-making processes often attract significant public scrutiny. Even where enforcement action does not result in financial penalties, reputational impact may affect investor confidence, customer trust, and employee morale. Embedding ethical and compliance considerations into AI governance frameworks becomes not merely prudent, but essential to sustaining long-term corporate trust.
Indemnification arrangements offer limited protection. Statutory restrictions and public policy considerations constrain the extent to which companies can indemnify directors against regulatory penalties or findings of breach of duty. Effective AI governance therefore becomes a mechanism for protecting directors’ personal exposure as well as corporate value.
Comparative Regulatory Approaches
The European Union’s Artificial Intelligence Act represents the most comprehensive attempt to regulate AI to date. Adopting a risk-based framework, it imposes heightened obligations on high-risk systems, including governance, transparency, and human oversight requirements.⁶ Although the UK is no longer bound by EU legislation, the Act’s extraterritorial reach and normative influence remain significant for UK companies operating internationally.
The UK has adopted a pro-innovation, sector-led regulatory approach. While this may encourage technological development, it places greater reliance on corporate self-governance and increases uncertainty for directors seeking clear compliance benchmarks. In contrast, the United States emphasises disclosure and investor protection, with the SEC highlighting that misleading statements regarding AI capabilities or risk management may attract enforcement action where investor reliance is foreseeable.⁷ Across jurisdictions, a consistent theme emerges: regulators are less concerned with whether AI is used, and more concerned with how it is governed.
Reframing Reasonable Care in an Automated World
Artificial intelligence does not require a wholesale reform of corporate law, but it does necessitate recalibration. Directors are not expected to become technologists, but they are expected to establish governance structures capable of translating technical complexity into accountable decision-making. Documentation assumes heightened importance: risk assessments, board deliberations, and monitoring processes must be clearly evidenced to demonstrate due diligence.
Practical reforms may include dedicated board-level risk or technology committees, mandatory algorithmic impact assessments, clearer internal accountability for AI oversight, regular audits, and meaningful human review mechanisms. Such measures help bridge the gap between delegation and responsibility, ensuring that directors can meet duties of care even when technical expertise is limited.
Ultimately, the concept of reasonable care must evolve. In an environment where algorithmic decisions can produce systemic harm, reasonable diligence may require stronger institutional safeguards rather than deeper technical knowledge. The law’s task is not to inhibit innovation but to ensure that it remains anchored to responsibility.
Recommendations / Way Forward
- Establish a dedicated board-level AI oversight committee to monitor deployment, escalate risks, and ensure accountability.
- Implement mandatory Algorithmic Impact Assessments (AIAs) prior to system deployment, evaluating bias, data quality, and regulatory risk.⁸
- Maintain robust documentation and audit trails to evidence deliberations, risk assessments, and system testing results.⁹
- Integrate ethical and compliance considerations into AI governance frameworks, considering stakeholder impact consistent with section 172.¹⁰
- Commit to continuous monitoring and education, providing directors with training on AI fundamentals and ongoing scenario-based risk reviews.¹¹
These steps allow boards to translate abstract legal duties into actionable governance practices, enhancing strategic decision-making, corporate reputation, and regulatory compliance.
Conclusion
Artificial intelligence will not replace directors, but it will redefine what competent directorship requires. As algorithmic systems assume a greater role in corporate decision-making, legal and commercial risks associated with their use become increasingly difficult to ignore. Existing governance frameworks remain relevant, but only if adapted to the realities of opacity, scale, and delegation inherent in AI deployment. Treating artificial intelligence as a boardroom risk rather than a technical novelty is essential to preserving corporate accountability and stakeholder trust.
OSCOLA Footnotes
1. Companies Act 2006, s 172.
2. Companies Act 2006, s 174.
3. UK GDPR, art 22.
4. Information Commissioner’s Office, Guidance on AI and Data Protection (15 March 2023) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/> accessed 14 December 2025.
5. Re Barings plc (No 5) [1999] 1 BCLC 433 (Ch).
6. Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act).
7. US Securities and Exchange Commission, ‘SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence’ (Press Release 2024‑36, 18 March 2024) <https://www.sec.gov/news/press-release/2024-36> accessed 14 December 2025.
8. Companies Act 2006, s 172 (stakeholder considerations in AI impact assessments).
9. Companies Act 2006, s 174 (board documentation and audit trails).
10. Information Commissioner’s Office, Guidance on AI and Data Protection (2023) (ethical and compliance considerations).
11. US Securities and Exchange Commission, ‘SEC Charges Two Investment Advisers…’ (ongoing monitoring and disclosure obligations).