The Future of Artificial Intelligence Regulation in the United Kingdom

Authored By: Hayley Smyth

The Open University

Abstract

This article examines the current and future regulatory landscape of Artificial Intelligence in the United Kingdom (UK). It makes comparisons to the regulatory landscapes of other jurisdictions, including the European Union (EU) and the United States (US), in the context of AI.

Key Words: Artificial Intelligence, regulation, technology, Parliament, copyright.

Introduction

To date, the UK has no general legislation covering the development, deployment, and use of Artificial Intelligence. It is no secret that the world is currently experiencing an ‘AI boom’. Legislation in this area is important to provide clarity to businesses and AI developers, encourage public trust in AI, ensure the safety of society and its individuals, and maintain the UK’s position as a global leader in AI. This matters because the UK’s AI market boasted an impressive £72.3bn valuation in 2024, making it the third largest AI market in the world.1 At the same time, it is paramount that the UK continues to encourage innovation, which is difficult under tight regulation, and many of the world’s AI developers and companies have spoken out against AI regulation, as explored in this article. Notably, the AI Regulation Bill2 has returned to the Houses of Parliament after being tabled just before the 2024 General Election. This article outlines the regulatory framework set out by the AI Regulation Bill and discusses how likely the UK government is to support it and what it would mean for the future of AI in the UK.

The Artificial Intelligence (Regulation) Bill [HL]

The AI Regulation Bill was originally tabled during the 2023-2024 Parliamentary Session; however, it was reintroduced in the House of Lords on 4 March 2025 and has now reached its second reading. It is described as “a Bill to make provision for the regulation of Artificial Intelligence; and for connected purposes”. The re-emergence of the Bill suggests that policymakers’ concern over the lack of AI controls in the UK has not disappeared and may well be growing.

The Creation of the AI Authority

At its core, the Bill intends to establish a body known as the AI Authority, whose functions would include monitoring and assessing the implementation of the principles outlined in Section 2 and assessing the extent to which those principles support innovation. Section 2 provides that the regulation of AI should deliver security, transparency, fairness, accountability, and contestability. It also reflects the main principles surrounding AI set out in the government’s White Paper from 20233, which acts as a non-statutory regulatory framework for AI in the UK.

The AI Authority would also be tasked with monitoring the economic risks arising from AI in the UK, consulting the AI industry to form a response to emerging AI trends, supporting sandbox initiatives to help innovators bring new AI technology to market, accrediting appropriate independent AI auditors, providing education to give clarity to businesses, and promoting interoperability with international regulatory frameworks. As The Alan Turing Institute4 suggests, the latter is especially crucial to fostering AI innovation, as regulatory inconsistencies would impose higher compliance costs on UK businesses. They may also make the UK less attractive to AI developers and technology companies wishing to do business in the UK. This is a particular concern for start-ups and medium-sized companies, which are more likely to lack the resources needed to navigate distinct compliance rules across jurisdictions, making it difficult for their products to reach other markets. As a result, businesses that do possess the resources to comply with all of the necessary AI regulations are inclined to gain an ‘unfair advantage’5.

The UK Government’s Current Position

The UK government’s approach to controlling AI is non-statutory and principles-based, using the framework set out in its White Paper, A pro-innovation approach to AI regulation, which never became legislation. Instead, the government wishes to focus on promoting innovation and leading the world in this sector. This was further displayed at the AI Action Summit in Paris, France (February 2025), where neither the UK nor the US became a signatory to the Inclusive and Sustainable AI declaration.6 The UK’s approach mirrors the US’s stance: according to BBC News (2025), US Vice President JD Vance stated in Paris that overregulation may “kill a transformative industry just as it’s taking off”.7 BBC News also noted that the UK’s move was welcomed by some, such as UKAI, a trade association for the UK’s AI sector, but criticised by others. For example, the head of AI at Full Fact, a fact-checking organisation, stated that not signing the declaration puts the UK’s position as a world leader in safe AI innovation at risk.

It is worth mentioning, however, that the UK did become a signatory to other AI initiatives at the Paris summit, which highlights that the government remains committed to championing AI across the globe and is not entirely opposed to statutory AI controls. For example, the UK signed the Building Trust in AI (2025) report, which advocates a ‘risk-based approach’ to addressing the cybersecurity challenges posed by AI8. This reflects that the UK government is still aiming for some controls, despite being selective in its engagements. The result is uncertainty, and whether the UK will legislate on such topics remains to be seen.

Compared to the European Union and the US

The UK’s regulatory landscape takes a different approach from the EU’s. Last year saw the EU introduce its first comprehensive piece of general AI legislation, the AI Act 20249. The Act entered into force in August 2024 and becomes fully applicable on 2 August 2026. The UK, by contrast, wishes to encourage innovation, which may be at risk if strict controls are placed on the development, use, and deployment of AI; indeed, some of the bigger players of the ‘AI boom’ have made their dissatisfaction known with regulations and proposals in other jurisdictions, especially within the EU.

For example, Meta has decided not to sign the EU’s General-Purpose AI Code of Practice, a voluntary tool that AI providers can sign to demonstrate their compliance with the EU AI Act.10 Meanwhile, according to the Financial Times, multiple companies, including Airbus, have signed an open letter urging the EU to place a two-year pause on the AI Act, arguing that it threatens the competitiveness of the AI sector.11 If the UK follows in the EU’s footsteps, it may face backlash from the AI and technology industry, and its presence as a global leader in AI may become diluted, as companies may decide to focus their efforts on jurisdictions where AI regulations are more lenient, such as the US. The US has limited federal legislation regulating AI, relying instead on guidelines and proposed frameworks to guide AI developers and businesses. However, the US also has different AI regulation regimes in different states, creating uncertainty, as firms must shift their efforts and resources towards compliance with state rules. The UK should avoid both a fragmented regulatory landscape like that of the US, which would hinder the progress of AI, and overly tight regulations, which would deter AI developers from developing and deploying AI systems in the UK.

The Creative Industry

The Data (Use and Access) Act 202512 received Royal Assent on 19 June 2025, despite strong opposition from the creative industries and repeated government defeats in the House of Lords over amendments covering AI and copyright. One rejected amendment would have required the government to “publish a draft Bill containing legislative proposals” on transparency towards copyright owners regarding the use of their copyright material as data inputs for AI models13. The government has faced increasing pressure from the creative industry to legislate on copyright and AI, most notably through the silent album ‘Is This What We Want?’, by ‘Various Artists’, released on 25 February 2025. The government did, however, agree to publish a report, as required by Section 136 of the Act, on the use of copyright materials in the development of AI products.

Overall, the rejection of the amendment shows that the government is reluctant to legislate on AI, whilst agreeing to consider proposals further down the line. Perhaps the government is taking a slow and steady approach before legislating on AI rather than racing to establish controls and protections. In the meantime, those working in the creative industries are left vulnerable to AI, and there is a distinct lack of certainty for all involved, including the creative industry, AI developers, and technology businesses, as the government has not signalled whether it will grant protections and controls over copyright materials used to train AI.

This lack of clarity extends into the ongoing landmark case, Getty Images v Stability AI.14 In this case, Getty Images claims that Stability AI has possession in the UK of an article which it knows or has reason to believe “is an infringing copy of Getty’s copyright works”, contrary to the Copyright, Designs and Patents Act 1988.15

The outcome of Getty’s claim may influence the UK Government’s next actions, as the judgment will offer the government a judicial interpretation. A ruling in favour of the creative industry could prompt the Government to establish clearer rules surrounding copyright and AI. Alternatively, the ruling may favour the UK’s established approach of fostering innovation through a non-statutory regulatory framework in order to attract AI developers, in which case the government may further resist the pressure to introduce regulations.

Conclusion

The UK government’s actions imply that it may uphold its pro-innovation approach and resist pressure to legislate on AI in order to ensure that the UK remains one of the world’s centres for AI. The government must strike a balance between protecting society and its individuals and transforming the AI industry so as to maintain the UK’s status as a global leader in AI and its position as the third largest AI market in the world. Achieving this balance may be possible by introducing protections for society and individuals that satisfy the principles set out in the Government’s 2023 White Paper and in Section 2 of the new AI Bill, whilst remaining cautious about overregulation, which could stifle innovation and place a burden on AI developers. Without this balance, the UK remains an arena of legal uncertainty surrounding AI.

References:

1Department for Business and Trade, ‘Artificial Intelligence’ (UK Government, 17 July 2025). <https://www.business.gov.uk/campaign/grow-your-tech-business-in-the–uk/artificial-intelligence/#:~:text=The%20UK%20is%20the%20third,any%20other%20country%20in%20Europe>

2Artificial Intelligence (Regulation) Bill [HL Bill 76]

3Department for Science, Innovation and Technology and Office for Artificial Intelligence, AI Regulation: A Pro-Innovation Approach (White Paper, 2023)

4The Alan Turing Institute, Written Evidence to the Communications and Digital Committee, House of Lords, ‘Large Language models and generative AI’ Inquiry (Session 2023-2024) LLM081, 13 September 2023. <https://committees.parliament.uk/writtenevidence/124407/pdf/>

5Faveri, B., Shank, C., Whitt, R., and Dawson. P. ‘The Need for and Pathways to AI Regulatory and Technical Interoperability’ (Tech Policy Press, 16 April 2025). <https://www.techpolicy.press/the-need-for-and-pathways-to-ai-regulatory-and-technical-interoperability/>

6Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet (2025). <https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet>

7Kleinman, Z. and McMahon, L. ‘UK and US refuse to sign international AI declaration’ (BBC News, 11 February 2025). <https://www.bbc.co.uk/news/articles/c8edn0n58gwo>

8French National Cybersecurity Authority (ANSSI), Building Trust in AI Through a Cyber Risk-based Approach (Report, 2025).

9Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence [2024] OJ L 12.7.2024

10Samantha Subin, ‘Meta says it won’t sign Europe AI agreement, calling it an overreach that will stunt growth’ (CNBC, 18 July 2025). <https://www.cnbc.com/2025/07/18/meta-europe-ai-code.html>

11Moens, B. and Bradshaw, T., ‘European CEOs urge Brussels to halt landmark AI Act’ (Financial Times, 3 July 2025). <https://www.ft.com/content/a825759e-aec8-4184-bc73-f604f169204c>

12Data (Use and Access) Act 2025 (c.18)

13House of Commons Library, Data (Use and Access) Bill [HL]: progress of the bill (Research Briefing).

14Getty Images (US) Inc and others v Stability AI Ltd EWHC 38 (Ch), EWCA Civ 749

15Louise Popple, ‘Getty Images v Stability AI: where are we after the trial – copyright?’ (Taylor Wessing, 9 July 2025). <https://www.taylorwessing.com/en/insights-and-events/insights/2025/07/getty-v-stability>
