
Can the Law Keep Up with AI? A Legal Look at Generative Technology in the UK

Authored By: Sanjana Mahesh

De Montfort University Dubai

Introduction

Generative Artificial Intelligence (GenAI) systems, exemplified by tools such as ChatGPT, DALL·E, and GitHub Copilot, have transformed content creation, automation, and decision-making. These technologies use large-scale machine learning models trained on enormous datasets to produce human-like outputs including text, images, music, and computer code. The speed and scale of GenAI adoption have been extraordinary, permeating diverse sectors from education and healthcare to law and finance.

However, the law in the United Kingdom currently struggles to adapt to these developments. Traditional legal frameworks, rooted in human agency, intent, and accountability, face significant challenges when applied to AI, which operates autonomously and often opaquely. Central concerns relate to liability for harm caused by AI outputs, intellectual property ownership of AI-generated content, and ethical governance, particularly in professions such as law.

This article critically examines the UK’s readiness to regulate GenAI by analysing the current state of liability law, intellectual property rights, and ethical regulations applicable to AI tools. It draws on recent case law, statutory provisions, government policy, and academic commentary to explore the extent to which existing legal principles suffice or require reform. The article concludes with concrete proposals aimed at ensuring that the UK’s legal system can keep pace with rapid technological change without compromising fundamental rights and ethical standards.

Liability for Harm Caused by Generative AI

The Regulatory Gap and Existing Legal Frameworks

Currently, the UK lacks a comprehensive statutory framework that explicitly governs liability for AI systems, including generative AI. While the European Union has enacted its AI Act and has explored a complementary AI Liability Directive, the UK has adopted a lighter-touch, “pro-innovation” regulatory stance post-Brexit, relying heavily on existing laws such as tort, contract, consumer protection, and data protection statutes.

This regulatory gap leaves uncertainty about who is accountable when AI systems cause harm. Unlike human actors, AI lacks legal personhood, creating complex questions about responsibility allocation between developers, deployers, and end-users.

Tort Law: Challenges of Duty and Causation

In English tort law, negligence requires proof that a defendant owed a duty of care, breached that duty, and caused foreseeable damage to the claimant. However, applying these principles to generative AI is fraught with difficulty.

Generative AI systems, such as large language models, operate as “black boxes”: their decision-making processes are largely inscrutable even to their creators. Establishing causation, that is, proving that a specific AI output caused the claimant’s harm, is challenging because AI models generate outputs probabilistically based on training data rather than through deterministic reasoning.
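
That probabilistic quality can be illustrated with a minimal sketch in Python. The token probabilities below are invented purely for illustration; a real model derives them from billions of learned parameters. Because each completion is sampled, an identical prompt can yield different outputs on different runs, which is precisely what complicates proving that a particular harmful output was the foreseeable consequence of any one party’s conduct:

    import random

    # Toy next-token distribution for the prompt "The claimant is entitled to ..."
    # (hypothetical probabilities, for illustration only)
    next_token_probs = {
        "damages": 0.40,
        "rescission": 0.25,
        "nothing": 0.20,
        "an injunction": 0.15,
    }

    def sample_next_token(probs):
        """Draw one token at random, weighted by its probability."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    # The same prompt can produce a different completion on each run:
    for run in range(3):
        completion = sample_next_token(next_token_probs)
        print(f"Run {run + 1}: The claimant is entitled to {completion}")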

For example, if an AI chatbot provides incorrect legal advice causing financial loss, determining whether the software developer, the deploying company, or the user is liable under negligence is legally unsettled. The courts have yet to clarify how tort principles extend to AI-related harm, but academic commentators warn of a potential accountability gap.

Product Liability and the Definition of “Product”

The Consumer Protection Act 1987 imposes strict liability on producers for defective products that cause injury or property damage. Yet whether generative AI software qualifies as a “product” under this Act remains ambiguous.

The Act implements the EU Product Liability Directive, which traditionally targets tangible goods. However, with AI software often delivered as cloud-based services updated continuously, it is unclear whether it fits within the product definition. If AI software is excluded, injured parties must rely on negligence or contract law, which have higher proof requirements.

The Law Commission has suggested extending strict product liability to AI systems in certain contexts to provide clearer consumer protections, but no legislative reforms have yet been enacted.

Contractual Liability and Terms of Use

Most AI tools are licensed under detailed contracts or terms of use, which often contain broad disclaimers limiting liability for inaccuracies or harm.

Where contracts are involved, remedies for defective or harmful AI outputs may be pursued under breach of contract or misrepresentation claims. The Consumer Rights Act 2015 implies terms of satisfactory quality and fitness for purpose for goods and digital content, and of reasonable care and skill for services, any of which may apply to AI offerings. However, the enforceability of limitation clauses is tested by the Unfair Contract Terms Act 1977, which demands that disclaimers be reasonable.

Given the novelty of AI, courts may scrutinise contracts to ensure users are not unfairly deprived of remedies, particularly for harms beyond mere disappointment or inconvenience.

Data Protection and Automated Decision-Making

AI systems processing personal data are subject to the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018. Article 22 of the UK GDPR restricts decisions based solely on automated processing that produce legal or similarly significant effects on individuals, absent meaningful human involvement.

If a generative AI system produces decisions or recommendations that adversely affect individuals (for example, denying loans or employment), the data controller must provide rights to explanation, contestation, and human review, as illustrated in the sketch below. Failure to comply can lead to enforcement action by the Information Commissioner’s Office (ICO).
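
A brief sketch suggests how a deployer might operationalise that safeguard in practice. The Decision type, the finalise function, and the review callback below are hypothetical names invented for illustration; Article 22 prescribes the outcome, not any particular design:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str        # e.g. "loan_refused"
        significant: bool   # legal or similarly significant effect on the person?
        explanation: str    # reasons the individual can understand and contest

    def finalise(ai_decision, human_review):
        """Route significant automated decisions through human review.

        A decision with legal or similarly significant effects should not
        rest solely on automated processing, so it is handed to a reviewer
        who can overturn the system's recommendation.
        """
        if ai_decision.significant:
            return human_review(ai_decision)  # a human makes the final call
        return ai_decision                    # low-impact outputs may stand alone

    # Example: route a hypothetical refusal through a placeholder reviewer
    refusal = Decision("loan_refused", significant=True,
                       explanation="declared income below the product threshold")
    reviewed = finalise(refusal, human_review=lambda d: d)  # stand-in for a real workflow
    print(reviewed.outcome)

For the human step to count, it must be meaningful rather than a rubber stamp; the ICO’s guidance stresses that merely nominal review does not take a decision outside Article 22.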

Notably, the ICO has sanctioned companies like Clearview AI for unlawful biometric data  processing, demonstrating regulatory willingness to enforce data protection standards against  AI misuse.

Liability for Algorithmic Discrimination

The Equality Act 2010 prohibits discrimination on grounds such as race, gender, or disability. If AI outputs result in discriminatory impacts, for instance biased recruitment recommendations, affected individuals may seek remedies under discrimination law.

Recent judicial scrutiny of AI-enabled facial recognition technology under human rights law (notably, Bridges v Chief Constable of South Wales Police) has highlighted the need for fairness and transparency in AI deployment, especially by public bodies.

Intellectual Property Issues in Generative AI

Copyright and Training Data Use

Generative AI models are trained on vast datasets, including copyrighted materials such as books, articles, images, and music. The UK’s Copyright, Designs and Patents Act 1988 (CDPA) grants authors exclusive rights over reproduction and adaptation of their works.

A pivotal legal question is whether training an AI model constitutes an infringement through unauthorised copying. Although training may not involve storing exact copies, courts and academics debate whether “copying” under section 17 CDPA occurs when protected material is ingested and algorithmically analysed.

The UK government’s initial proposal to introduce a statutory “text and data mining” (TDM) exception allowing AI training on copyrighted works was withdrawn following opposition from rights-holders.

The ongoing UK litigation, notably Getty Images v Stability AI, challenges the legality of using copyrighted images for training without consent. This case may set critical precedents on the balance between innovation and copyright protection.

Ownership and Authorship of AI-Generated Content

Section 9(3) CDPA addresses computer-generated works, defining the “author” as the person by whom the arrangements necessary for the creation of the work are undertaken. This provision predates modern AI and provides ambiguous guidance for GenAI outputs.

If a human uses AI to generate text or images by entering a prompt, it is uncertain whether that person qualifies as the author for copyright purposes or whether the output is unprotected altogether.

The UK Intellectual Property Office (UKIPO) has acknowledged this uncertainty and suggests that copyright may not subsist in fully autonomous machine-generated works lacking meaningful human input.

This ambiguity complicates commercial exploitation of AI-generated content and affects licensing, enforcement, and investment decisions.

Patentability and Inventorship

In patent law, only a natural person may be named as an inventor under the Patents Act 1977. The UK Court of Appeal confirmed this in Thaler v Comptroller-General of Patents, concerning the AI system DABUS, and the UK Supreme Court subsequently upheld that conclusion.

This position denies patent protection for AI-generated inventions unless a human contributes to the inventive steps, preserving traditional patent concepts but potentially discouraging AI innovation.

Ethical Regulation and Professional Implications

The UK’s Principles-Based Regulatory Model

The UK government’s 2023 White Paper, A Pro-Innovation Approach to AI Regulation, promotes a decentralised, principles-based system relying on existing regulators (the ICO, CMA, FCA, and others) to enforce AI-related rules within their sectors.

The framework emphasises five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. However, this approach has been criticised for lacking binding force, legal clarity, and enforcement powers.

Professional bodies, including the Law Society and Bar Council, have called for clearer, binding codes of practice to guide ethical AI use.

AI Use in Legal Practice

AI adoption in legal services has accelerated, with tools assisting in document review, legal research, and drafting. While efficiency gains are evident, risks include hallucinated outputs such as fabricated case law, alongside ethical dilemmas about competence and confidentiality.

Recent US cases sanctioning lawyers for submitting AI-generated false legal citations underscore the stakes. UK regulators such as the Solicitors Regulation Authority (SRA) are developing guidance emphasising that lawyers retain responsibility for verifying AI outputs and maintaining client confidentiality.

Human Rights and Public Sector AI

Judicial decisions like Bridges v Chief Constable of South Wales Police highlight privacy and human rights risks posed by AI surveillance technologies. The decision confirms that automated decision-making by public authorities requires rigorous justification to comply with rights under the European Convention on Human Rights (ECHR), incorporated into domestic law through the Human Rights Act 1998.

These principles extend to AI in public administration, emphasising transparency, accountability, and the right to human review.

Recommendations for Reform

To ensure UK law keeps pace with GenAI, a combination of legislative, regulatory, and professional reforms is necessary:

  1. AI-Specific Liability Legislation: Introduce a statutory framework clarifying liability for AI-generated harm, including principles for causation and shared responsibility among developers, deployers, and users.
  2. Expand Product Liability: Amend the Consumer Protection Act 1987 to explicitly include software and AI systems as products for strict liability purposes.
  3. Clarify IP Law: Reform the CDPA to define copyright ownership and protection for AI-generated works clearly, and establish statutory TDM exceptions with fair licensing schemes to balance rights-holders and innovation.
  4. Regulatory Guidance and Codes: Mandate binding professional codes of conduct on AI use in regulated professions, developed in consultation with stakeholders.
  5. Empower Regulators: Provide ICO, CMA, FCA, and others with enhanced powers and resources to audit AI systems, enforce compliance, and impose sanctions where necessary.
  6. Transparency and Disclosure: Require organisations deploying AI to disclose its use, especially in decision-making affecting individuals, to promote contestability and trust.

Conclusion

Generative AI presents both opportunities and risks that challenge existing UK legal frameworks. Current liability doctrines, IP laws, and professional regulations only partially address the unique attributes of AI, such as opacity, autonomy, and scale.

Without targeted reform, accountability gaps may widen, intellectual property rights may remain uncertain, and ethical standards may falter, potentially undermining public trust and innovation.

To keep pace, the UK must develop clear, coherent, and enforceable legal regimes that protect rights while fostering technological progress. Only then can the law not just keep up with AI, but lead its responsible development in society’s best interest.

Bibliography

Primary Sources

Legislation

  • Consumer Protection Act 1987
  • Copyright, Designs and Patents Act 1988
  • Data Protection Act 2018
  • Equality Act 2010
  • Human Rights Act 1998
  • Patents Act 1977
  • UK General Data Protection Regulation
  • Unfair Contract Terms Act 1977

Cases

  • Bridges v Chief Constable of South Wales Police [2020] EWCA Civ 1058
  • Thaler v Comptroller-General of Patents [2021] EWCA Civ 1374
  • Thaler v Comptroller-General of Patents, Designs and Trade Marks [2023] UKSC 49

Secondary Sources

Books and Articles

  • E Rosati, ‘Infringing AI: Liability for AI-Generated Outputs’ (2024) EJRR
  • J Kingston, ‘Artificial Intelligence and Legal Liability’ (2018) arXiv https://arxiv.org/abs/1802.07782 accessed 4 July 2025
  • N Noto La Diega, ‘Artificial Intelligence and the UK Legal System’ (2023) Oxford IP Journal

Reports and Guidance

  • Bar Council, ‘AI Guidance for the Bar’ (2024)
  • HM Government, A Pro-Innovation Approach to AI Regulation (2023) https://www.gov.uk/government/publications/a-pro-innovation-approach-to-ai-regulation accessed 4 July 2025
  • ICO, AI and Data Protection Guidance (2023)
  • Law Society, ‘Generative AI – The Essentials’ (2024)
  • UK Intellectual Property Office, AI and Copyright Consultation Response (2023)

News and Media
  • Law Society Gazette, ‘AI Tools and the Practice of Law’ (2024)
  • Reuters, ‘Lawyers Sanctioned for Using Fake AI Case Law’ (2023)

Additional References (from footnotes)

Cases

Donoghue v Stevenson [1932] AC 562 (HL)

Legislation

  • Consumer Rights Act 2015
  • Copyright, Designs and Patents Act 1988, s 1(1) and s 9(3)
  • Patents Act 1977, s 13(2)
  • UK GDPR, Article 22

Reports and Guidance

  • Law Commission, Automated Vehicles: Liability and Insurance (Law Com No 427, 2021)
  • UK Government, AI and Intellectual Property Consultation (2021)
  • Solicitors Regulation Authority, ‘Guidance on AI Use in Legal Practice’ (2024)

Other References

  • S Foster, ‘Product Liability and Software’ (2022) 38 Journal of Consumer Law 19
  • Terms and conditions of the OpenAI API

 
