On 29 March 2023, the UK government issued a white paper on its domestic AI regulation. The report runs through the current regulatory landscape for AI (artificial intelligence) and details the UK's ambitious plans to enhance its regulatory environment. If brought to fruition, these plans would make the UK one of the best places in the world to develop and deploy AI.
The report leaves no room for doubt: the UK is dead-set on securing its position as a global leader in AI. It aims to create a pro-innovation regulatory framework that (hopefully) will make the UK the most attractive place in the world for AI innovation.
With these rapidly evolving laws comes a need for a clearer perspective on AI regulation in the UK. This article aims to help demystify the UK’s existing AI regulations and shed light on its future trajectory.
We trust this piece will become a useful guide for those actively engaged in the UK's AI industry, such as AI startup founders, investors, and anyone keen on understanding the regulatory risks tied to AI development and deployment. It should equip you with the fundamental knowledge needed to navigate the regulatory complexities surrounding AI in the UK.
📚 Read more: Explore the global regulatory landscape with our global AI regulations tracker
A general overview of the UK’s AI regulatory framework plans
The white paper indicates a clear intention by the UK government to create a proportionate and pro-innovation regulatory framework, focusing on the context in which AI is deployed rather than the technology itself.
At the heart of the UK's framework are five guiding principles that govern the responsible development and use of AI across all sectors of the economy. These principles are:
- Safety, security, and robustness: ensuring AI systems are reliable and secure.
- Appropriate transparency and explainability: making sure AI operations are transparent and can be easily understood by users.
- Fairness: ensuring AI does not contribute to unfair bias or discrimination.
- Accountability and governance: holding AI systems and their operators accountable for their actions.
- Contestability and redress: providing mechanisms for challenging AI decisions and seeking redress.
The UK government recognises that without regulatory oversight, AI technologies could pose many risks, such as risks to privacy and human dignity. As a result, it aims to design its regulatory intervention to ensure the responsible use of AI while taking care not to stifle innovation. It focuses its efforts on high-risk AI systems in specific contexts, such as medical diagnostics, critical infrastructure monitoring, or robotics.
Nevertheless, while the white paper highlights that the focus is on high-risk AI systems, it acknowledges that lower-risk AI systems may be subject to AI-specific regulation too, depending on how and in what context the AI system is used. For example, a lower-risk system could fall within scope if changes to the system or its use elevate it to a high-risk category.
📚 Discover more: how to mitigate compliance risks when using AI in HR and recruitment
What does the current UK regulatory landscape for AI look like?
The UK government admits that its current regulatory landscape for AI is a patchwork of legal requirements, standards, and guidelines that can be challenging to piece together. Still, the white paper highlights the following key regulations that provide the UK with its current framework for AI regulation:
- Data Protection Act 2018 / UK General Data Protection Regulation (UK GDPR). The UK GDPR is the version of the EU's GDPR that was incorporated into UK law after Brexit, and the Data Protection Act 2018 supplements it. Together, they govern how personal data should be handled by businesses and organisations, ensuring that individuals' privacy rights are respected. In the context of AI, this means that any AI system processing personal data must do so in a way that respects these rights. For instance, if a company uses AI to make automated decisions about individuals, it must ensure that the decisions are fair, transparent, and can be challenged by the individuals affected.
- Human Rights Act 1998. This Act incorporates the rights set out in the European Convention on Human Rights into UK law. These rights include the right to respect for private and family life, which can be impacted by AI systems, particularly those that process personal data or make automated decisions about individuals.
- Equality Act 2010. This Act prohibits discrimination in various areas of public life, including employment and the provision of services. If an AI system is used in a way that results in discrimination, such as biased hiring practices or unfair service provision, the organisation could be in breach of the Equality Act.
Multiple regulations may apply simultaneously to certain AI use cases. For instance, discriminatory outcomes resulting from AI may violate the Equality Act 2010 as well as the fairness principle under data protection law.
Furthermore, product and sector-specific legislation may also apply to AI that has been integrated into some services or products. Specifically, this may include legislation for financial services, electrical and electronic equipment, medical devices, and even toys.
Also, some sector-specific regulatory bodies have gone so far as to proactively adapt their approaches to AI-enabled technologies. For example, in 2022 the UK's Medicines and Healthcare products Regulatory Agency published a roadmap clarifying, through guidance, the requirements for AI and software used in medical devices.
📚 Discover more: how children’s data in AI apps can be processed compliantly
Other regulators, such as the Information Commissioner's Office, which oversees data protection in the UK, have also published more general guidance on AI. This guidance aims to clarify AI requirements regarding fairness and the applicability of certain provisions of the UK's data protection laws.
In addition to the above, the UK government expects its consumer law and tort law to apply to AI usage. These should be relevant where consumers enter into sales contracts for AI-based products and services, ensuring that businesses are prohibited from including unfair terms in consumer contracts.
📚 Read more: Discover when you may need a DPO or an AI Ethics Officer to stay compliant with AI regulations
The future of the UK’s AI regulation
Despite existing safeguards, the UK recognises that some AI risks still arise across, or in the gaps between, existing regulatory remits. To mitigate this, the UK government is trying to introduce a more streamlined regulatory landscape. To achieve this, it is taking a principles-based approach, basing its AI regulation on the following four main pillars:
- Defining AI. The UK is currently working on a clear definition of AI to help regulators and give clarity to those creating AI technologies.
- Context-Specific Approach. The UK understands that AI can have different impacts depending on how it's used, so it plans to regulate AI based on its specific context, such as the use of self-driving vehicles or of foundation models and LLMs. The UK is also launching a Foundation Model Taskforce to help build capability in this area.
- Cross-Sectoral Principles. The UK is also creating a set of guiding rules for regulators to follow when dealing with AI. These rules will help ensure good practices across all stages of AI development and use.
- Central Functions. The UK is planning to set up a central body to support the AI regulatory framework. Its functions will include monitoring and evaluation to make sure the framework is working well and can adapt to changes in AI technology.
The UK also sees the value in creating safe spaces where AI innovations can be tested without the usual regulatory constraints. These “sandboxes” or “testbeds” would allow innovators to experiment in a controlled environment, and help the regulators identify potential AI-related risks before full-scale deployment. The UK is currently exploring different ways to implement these sandboxes to ensure they effectively support innovation and regulatory understanding.
Beyond that, the UK government is also focusing on the international alignment of AI regulations to support UK businesses in global markets and protect UK citizens from cross-border harms. In this context, the UK intends to keep the territorial application of its AI laws similar to that of its existing key laws, such as the Data Protection Act 2018 and the Equality Act 2010.
💡 Worth checking: explore global AI frameworks and learn about AI risk assessments for businesses
Other planned measures also contribute to the UK's AI regulatory strategy, such as:
- Conducting awareness campaigns aimed at educating consumers and users about AI regulation and the associated risks.
- Establishing frameworks to facilitate the assessment of AI-related risks by businesses and regulatory bodies.
- Strengthening the capabilities of regulators to effectively oversee and enforce AI regulations.
How will the EU AI Act influence UK companies working with AI?
Interestingly, though the white paper mentions measures to strengthen the UK's enforcement framework, it does not specifically address any changes to enforcement actions. As such, it seems we should not expect more rigorous sanctions for non-compliance, in contrast to the regulatory framework of the EU's AI Act, which establishes fines of up to 7% of global annual turnover.
Key take-aways on the UK’s AI regulation plans
To sum up, the regulatory approach of the UK regarding AI focuses on regulating specific use cases rather than the technology itself. Consequently, sector-specific rules will dominate the regulatory landscape, though universal cross-sectoral principles will provide a framework for those rules.
Currently, certain sectors in the UK have already implemented AI governance principles and provided guidance on AI requirements. However, sectoral regulation for AI is still in its early stages as the UK is in the process of developing a framework that strikes a balance between an innovation-supporting approach and the protection of user and consumer interests and rights.
Until the envisioned sector-based approach is fully implemented, AI continues to be governed primarily by human rights and anti-discrimination laws, a few sectoral regulations, and the UK's data protection framework. To learn more about the compliance of your AI application with the UK data protection laws, check out our article on how to incorporate privacy in AI product design.
There will not be a "one size fits all" compliance solution for businesses, and every AI use case will have to be evaluated in the context of the specific industry in which it operates. While these sector-specific rules are still mostly in development, basic compliance with data protection, human rights, and consumer safety laws will help ensure your current compliance and lay a robust groundwork for future regulations.
Comply with UK AI regulation with help from Legal Nodes
Kickstarting your compliance with UK AI regulations may seem like a daunting and long journey, but with Legal Nodes you can get the support you need when you need it most. With the upcoming global regulatory focus on AI, many organisations should start taking steps to mitigate the legal risks associated with their development or use of AI. If your business falls into this category, the experts at Legal Nodes can help.
Make your AI application compliant
Kostiantyn holds a certification as an Information Privacy Professional in Europe (CIPP/E). Fuelled by his passion for law and technology, he is committed to the protection of fundamental human rights, including data protection. He adds a musical touch to his repertoire as an enthusiastic jazz pianist in his spare time.