Why are AI risk assessments important?
While AI brings numerous benefits to various industries and society, it inherently carries specific risks. These include:
- biases in models leading to unfair results
- the “black box” problem, which creates a lack of transparency in decision-making processes
- potential over-reliance causing complacency and reduced human oversight
- possibilities of misuse
Given these inherent risks and the vast, varied regulatory environment, it's vital for businesses to undertake AI risk assessments. These assessments involve analyzing existing (and sometimes proposed) frameworks. Frameworks are not strictly regulations; rather, they are guidelines essential for managing AI's unique risks in business applications. Ultimately, businesses should respect all applicable regulatory frameworks, as they are designed to ensure that AI is used in business operations responsibly and compliantly.
Below, we’ve outlined a selection of established and emerging legal frameworks for AI risk assessment. These are crucial for managing AI risks and ensuring compliance, both within specific jurisdictions and, in some instances, globally. In a separate article, we've mapped key changes across the global regulatory landscape with our global AI regulations tracker.
Examples of AI use in businesses
Businesses that handle children's data will need to follow the relevant frameworks so that they process children's data in their AI apps compliantly. Companies can also explore ways to build privacy protection into their AI product design. As more organisations adopt ChatGPT in their business processes, they'll need to address ChatGPT privacy risks and ensure GDPR compliance with OpenAI’s API. Businesses using AI to recruit new staff or manage human resources should also be thinking about how to mitigate compliance risks when using AI in HR.
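To make the OpenAI API example concrete, here is a minimal, hypothetical sketch of one data-minimisation safeguard: stripping obvious personal identifiers from a prompt before it leaves your infrastructure. It assumes the official OpenAI Python SDK and an `OPENAI_API_KEY` environment variable; the regex patterns and model name are illustrative only, not a complete PII solution.

```python
import re

from openai import OpenAI  # official OpenAI Python SDK (v1+)

# Illustrative patterns only; a production system would use a dedicated
# PII-detection tool rather than simple regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before
    the text leaves your infrastructure."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_input = "Contact Jane at jane.doe@example.com or +44 20 7946 0000."
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute your own
    messages=[{"role": "user", "content": redact_pii(user_input)}],
)
print(response.choices[0].message.content)
```

In practice, a filter like this would complement, not replace, a data processing agreement with the vendor and a proper DPIA.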
EU GDPR and Data Protection Impact Assessments
The GDPR was enacted to regulate and safeguard the processing of personal data at the EU level. While it regulates primarily through a set of principles relating to data processing, it also contains specific rules and provisions pertinent to the deployment and operation of AI.
One such provision is the requirement to conduct a Data Protection Impact Assessment (DPIA) for processing operations that are likely to result in a high risk to the rights and freedoms of natural persons. A DPIA is a step-by-step process, illustrated in the sketch after this list, designed to:
- examine how personal data is handled
- determine whether the way it’s being processed is safe and balanced
- identify and address any risks to people's rights and freedoms
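As a loose illustration of what the risk-identification step can produce, here is a hypothetical sketch of a DPIA-style risk register in Python. The scoring scheme, threshold, and example risks are assumptions made for illustration; real DPIAs follow the methodology of your supervisory authority (for example, the CNIL's PIA guides).

```python
from dataclasses import dataclass

# Hypothetical scoring scheme: risk score = likelihood x severity.
@dataclass
class ProcessingRisk:
    description: str
    likelihood: int  # 1 (remote) to 3 (likely)
    severity: int    # 1 (minimal) to 3 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

# Example risks an AI deployment might surface; purely illustrative.
risks = [
    ProcessingRisk("Model memorises and reproduces personal data", 2, 3),
    ProcessingRisk("Automated decision lacks human review", 3, 3),
    ProcessingRisk("Vendor stores prompts outside the EEA", 2, 2),
]

for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    flag = "HIGH: mitigate before deployment" if risk.score >= 6 else "monitor"
    print(f"[{flag}] ({risk.score}) {risk.description}")
```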
Understandably, the DPIA is a crucial consideration when it comes to AI, especially where automated decision-making and profiling are concerned, as these can significantly impact individuals’ rights and freedoms.
In essence, the DPIA is one of the most important legal requirements organizations deploying AI must address to ensure they do so responsibly, transparently, and ethically.
👉 Protect your business with a GDPR Package
National initiatives for AI assessments
Several countries are additionally developing their own legal frameworks for assessing AI risks. Some are adapting existing regulatory structures like data protection laws, while others are creating new standards for AI risk assessment from the ground up.
These include both broad frameworks that apply to all AI and those that are specific to certain industries. Here’s a brief overview of several of the most important frameworks being developed.
USA
At the national level, the U.S. has advanced its AI strategy with the Artificial Intelligence Risk Management Framework 1.0, released by the National Institute of Standards and Technology (NIST) in early 2023. Mandated by the National Artificial Intelligence Initiative Act of 2020, the framework is currently voluntary.
NIST’s framework primarily focuses on bolstering the trustworthiness of AI systems, reducing biases, and safeguarding individual privacy. To aid in the implementation of the framework, NIST has introduced the AI RMF Playbook, a guide expected to be updated semiannually, offering evolving best practices.
Additionally, AI assessment regulations are being formulated at the state and city level in the U.S. For instance, New York City’s Local Law 144 requires annual bias audits of automated employment decision tools for employers using AI in hiring processes within the city.
France
In France, the development of an AI risk assessment framework is underway with the introduction of an AI-risk self-assessment tool by the National Commission on Informatics and Liberty (CNIL).
This tool, grounded in the GDPR compliance framework, is crafted to assist organizations in assessing how well their AI projects align with data protection principles and in identifying areas requiring improvement.
📚 Read more: Discover when you may need a DPO or an AI Ethics Officer to stay compliant with AI regulations
UK
In the UK, the government's recent AI Regulation White Paper highlights the creation of a framework for assessing AI risks. It promotes an adaptable and innovation-friendly approach to AI regulation and introduces a Portfolio of AI Assurance Techniques.
This portfolio is essential for anyone involved in creating and using AI systems. It shows practical examples of different AI assurance techniques that follow the government’s guidelines, aiming to encourage the development of responsible and trustworthy AI.
Check out our dedicated article for further help with navigating the UK's AI regulations.
Singapore
Singapore has been proactive in developing a comprehensive AI risk assessment framework, emphasizing ethical use, governance, and human-centric approaches to AI. The Personal Data Protection Commission of Singapore (PDPC) introduced the Model AI Governance Framework, which is a key component of Singapore’s national AI strategy.
The Model AI Governance Framework is voluntary and provides detailed and readily implementable guidance to private sector organizations. It is designed to help organizations implement responsible AI and data governance practices and aligns with international norms and ethical standards.
Proposed global legal frameworks for AI risk assessments
EU's AI Act Framework
In addition to the GDPR, the EU is advancing its legal frameworks for AI through the introduction of the AI Act. In brief, the AI Act is a set of rules designed to help developers and deployers of AI systems identify and manage AI-related risks.
On March 13, 2024, the European Parliament voted in favor of the Act, which is expected to come into force between April and June 2024. Following a two-year transition period, the Act will be fully applicable, and businesses will be expected to comply fully with the new rules.
In light of the recent changes regarding this legislation, we’ve updated our guide on the EU AI Act. Learn more about the new rules and the anticipated timeline of the AI Act’s enforcement.
The AI Act will require detailed assessments of “high-risk” AI systems that could significantly affect people and society. These assessments must probe the transparency, robustness, and accuracy of such systems to confirm they do not cause harm.
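As a purely illustrative example of the kind of technical check such an assessment might include, the hypothetical Python sketch below compares a model’s accuracy across demographic subgroups, a simple proxy for the bias concerns noted at the start of this article. The field names, data, and tolerance are assumptions, not requirements drawn from the Act’s text.

```python
from collections import defaultdict

# Toy predictions; in practice these would come from your model and a
# labelled evaluation set that records a (hypothetical) subgroup field.
predictions = [
    {"group": "A", "predicted": 1, "actual": 1},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 1},
    {"group": "B", "predicted": 1, "actual": 0},
]

correct: defaultdict[str, int] = defaultdict(int)
total: defaultdict[str, int] = defaultdict(int)
for p in predictions:
    total[p["group"]] += 1
    correct[p["group"]] += int(p["predicted"] == p["actual"])

accuracy = {group: correct[group] / total[group] for group in total}
gap = max(accuracy.values()) - min(accuracy.values())

print(f"Per-group accuracy: {accuracy}")
if gap > 0.1:  # hypothetical tolerance; set per your assessment methodology
    print(f"Accuracy gap of {gap:.0%} across groups: investigate for bias")
```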
Furthermore, the AI Act will introduce specific provisions for evaluating AI foundation models. In particular, it will require a thorough review of how these models are created and how they can be applied, focusing on their potential impact.
Other stringent requirements on AI systems will also apply under the AI Act. If you are interested in understanding how this future European law will apply to your AI system, we invite you to try out our Free EU AI Act Self-Assessment Tool.
OECD Framework
In 2022, the Organisation for Economic Co-operation and Development (OECD) introduced an AI Risk Evaluation Framework, centred on factors such as deployment context, intended use, sector, data governance, algorithm type, and system outputs.
While the OECD framework is primarily designed to guide policymakers in developing national frameworks, it can also serve as a valuable resource for businesses. It essentially provides a foundational vision of AI risk classification, highlighting crucial factors and suggesting mitigation strategies.
Because it is designed to serve as a basis for legal frameworks for AI assessment globally, it can also help you stay proactive and keep a step ahead of regulators.
UNESCO Framework
UNESCO unveiled its Recommendation on the Ethics of Artificial Intelligence in late 2021. Similar to the OECD framework, the Recommendation advocates for values and principles of responsible AI at the intergovernmental level and offers policy guidance to UN member states for adopting an ethical AI risk evaluation framework.
Acknowledging and integrating this framework into your company’s AI-based operations can notably improve your prospects for global AI compliance. As with the OECD framework, it can position you ahead of regulators by encouraging responsible AI procedures and proactive handling of AI-related risks.
Putting AI risk assessments into practice
AI risk assessments are being adopted widely. However, practical, clear tools for evaluating AI systems for compliance remain limited, and there is no universally recognized approach.
Recognized risk assessment standards such as ISO 31000 and IEEE 7010-2020 are available, but they are not mandatory and do not fully address the compliance-related risks, particularly the legal ones, that are currently under global regulatory scrutiny. The upcoming EU AI Act is expected to offer some direction, but uncertainties remain about its practical application in real-world scenarios.
In our practice, Data Protection Impact Assessments (DPIAs) under the GDPR framework have proved to be an effective tool for mapping data protection risks in AI applications. At Legal Nodes, we offer DPIAs to companies that are developing AI tools, or that want to integrate an AI tool into their systems, and want to do so compliantly.
Conducting a DPIA with Legal Nodes can help you:
- Map how data is handled within the AI system, understand the lawful basis for processing, and assess whether appropriate data handling measures are in place
- Evaluate your AI vendor by running a background check, analyzing their terms of service, and identifying their level of compliance with GDPR principles and requirements
- Evaluate the potential data protection risks of an AI solution and identify key areas for improvement
As well as performing a DPIA, our privacy specialists can also help you to evaluate how the other AI risk assessment frameworks mentioned in this article might apply to your business.
Kickstart your AI Risk Assessment
Kostiantyn holds a certification as an Information Privacy Professional in Europe (CIPP/E). Fuelled by his passion for law and technology, he is committed to the protection of fundamental human rights, including data protection. He adds a musical touch to his repertoire as an enthusiastic jazz pianist in his spare time.