In the vast, ever-evolving landscape of business, Artificial Intelligence (AI) has emerged as a game-changing technology. From automating mundane tasks like sorting emails to handling complex operations like managing supply chains, AI is helping businesses increase efficiency and reduce costs. However, as the uses of AI multiply, so do the risks to society. Consequently, regulators are working hard to introduce effective frameworks.
The visibility of AI has skyrocketed recently, thanks to the release of general-purpose AI-based services from tech giants like OpenAI, Google, and Meta. Many businesses have started to employ these services to deliver customized suggestions, virtual assistance, and customer support, personalizing content and enhancing user experiences. One recent study found that AI tools increased customer service workers' productivity by an average of 14%.
In addition to general-purpose AI, specialized AI models designed for particular sectors and industries are driving transformative changes across whole domains. For example, in healthcare, AI is transforming the landscape with its capabilities in disease forecasting, deciphering medical imagery, and tailoring treatment strategies.
The financial industry also leverages AI to facilitate KYC processes, oversee transactions, manage risks, and deliver analytics. Furthermore, the security sector is utilizing AI's facial recognition capabilities to enhance its surveillance and access control systems.
📚 Read more: What does the global AI regulatory landscape look like?
But what are the key risks of AI?
For all the benefits it brings to these industries and to our society, AI by its very nature carries specific risks.
The key dangers that AI developers and deployers should be wary of are:
- Biases in models that can lead to unfair outcomes.
- Lack of transparency of the AI model's decision-making, often referred to as the “black box” problem. This can make it difficult to understand how an AI model arrived at a particular decision.
- Over-reliance on AI that can lead to complacency and a lack of human oversight.
- The potential for misuse, particularly in the hands of malicious actors.
Given these inherent risks and the regulatory landscape that applies to AI applications, how can developers of these apps make sure they operate in a compliant way?
That's what we're going to discuss in this article, focusing on these questions:
- What existing (GDPR) and upcoming (EU AI Act) regulations are going to apply to an AI application?
- How can developers and deployers of AI applications ensure their privacy compliance?
- What additional compliance requirements should you consider when integrating someone else's AI models?
Regulators are after AI, aren't they?
It's no surprise that regulators worldwide are paying close attention to AI now. Many jurisdictions are in the early stages of adapting their legislation to safeguard against the potential pitfalls of AI. For others, AI legislation is on the horizon. Take a look at some of the emerging AI frameworks around the world.
For instance, the comprehensive AI Act of the EU received a favorable vote from the EU Parliament on March 13, 2024. Following a few final steps, the Act is expected to come into force between April and June 2024, with a summer 2026 deadline set for all businesses to be fully compliant.
📚 How will the EU AI Act impact your business? Find out with our Free EU AI Act Self-Assessment Tool.
In the United States, significant strides are also being made in regulating AI and promoting responsible AI use. The Biden-Harris Administration has recently announced new initiatives to advance responsible AI research, development, and deployment. These efforts aim to manage the risks of AI while promoting responsible AI innovation and protecting the rights and safety of individuals.
Furthermore, on the global stage, the G7 leaders recently emphasized their commitment to AI standards. They stressed the importance of inclusive artificial intelligence governance and interoperability to achieve their shared vision and goal of trustworthy AI, in line with their common democratic values.
Is my AI application free from compliance requirements until AI-specific laws come into force?
Not quite. As we anticipate more comprehensive AI regulations worldwide, other legal areas are already being applied to mitigate the risks associated with AI technology. In particular, data protection law has proven to offer a strong legal framework against unfair AI practices. Data protection laws are fundamentally based on principles like fairness, transparency, and accountability, which largely overlap with the principles of ethical AI. As such, these principles will remain crucial even after AI-focused laws are enacted and implemented.
Whatever your plans for complying with future AI laws, if users may share personal data with your AI application, or you plan to use their data for AI training, compliance with data protection laws is unavoidable.
📚 Read more: do you need a DPO or an AI Ethics Officer?
Does the GDPR apply to AI?
The General Data Protection Regulation (GDPR) is one of the cornerstone data protection laws globally. It's crucial to adhere to it if you process personal data and operate in the EU / UK, or target these markets from abroad. Given its comprehensive nature, complying with the GDPR ensures that most requirements applicable to AI apps from other data protection laws are met. Therefore, even if the GDPR doesn't directly apply to your situation, considering GDPR compliance could be beneficial if you plan to rely on AI in your operations.
👉 Protect your business with a GDPR Package
What are the GDPR compliance requirements for your AI use case?
To get the basics right on GDPR compliance in the context of AI, here are some areas you might want to focus on:
- Choosing An Appropriate Lawful Basis. The GDPR requires you to establish a lawful basis for processing personal data. This is particularly crucial when you're developing your own AI model and dealing with training data. The most suitable bases for businesses are often consent (explicit consent for sensitive data), legitimate interest, or the performance of a contract. It's essential to determine the appropriate lawful basis before you commence any data processing activities (see the sketch after this list).
- Transparency of AI Algorithms. You must provide clear and transparent information to users about how their data is processed by your AI application, especially if you collect their information for AI training purposes. Keep in mind that transparency extends beyond merely providing information: you will need to take measures to ensure that individuals fully understand the implications of interacting with your AI application.
- Compliance With Data Subject Rights. The GDPR grants individuals several rights, such as the rights to access their data, rectify inaccurate data, request erasure, and restrict processing. You must have processes in place to honor these rights and ensure that your response procedures are in line with the GDPR's requirements.
- Automated Decision-Making And Profiling. AI may make decisions without human intervention, which can significantly impact users. This covers serious applications like credit scoring as well as simpler scenarios like ranking players in a computer game. The GDPR grants individuals the right not to be subject to solely automated decision-making that has legal or similarly significant effects on them, so you will need to build appropriate compliance strategies into your application.
- Preventing Access By Minors. To prevent unauthorized access by minors, you might need to implement age verification measures and obtain parental consent. While the age threshold varies across EU member states (from 13 to 16), adopting 16 as your standard is the safest option.
- Data Protection By Design And By Default. This GDPR principle emphasizes the importance of embedding privacy and data protection safeguards into the initial design of AI applications. This approach is crucial to ensuring that user privacy is respected from the outset.
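To make some of these obligations concrete, here is a minimal Python sketch of how a few of the checks above might be wired into application code. Everything here is hypothetical: the class names, the age threshold handling, and the human-review routing are illustrative stand-ins, not a prescribed implementation.

```python
from dataclasses import dataclass
from enum import Enum

class LawfulBasis(Enum):
    """Illustrative subset of the GDPR Article 6 lawful bases."""
    CONSENT = "consent"
    CONTRACT = "performance_of_contract"
    LEGITIMATE_INTEREST = "legitimate_interest"

@dataclass
class ProcessingRecord:
    """Hypothetical record tying a processing activity to a documented basis."""
    purpose: str
    basis: LawfulBasis
    explicit_consent: bool = False  # explicit consent is needed for sensitive data

MINIMUM_AGE = 16  # the GDPR default; EU member states may lower it to 13

def may_process(record: ProcessingRecord, user_age: int, sensitive: bool) -> bool:
    """Gate processing on a documented lawful basis and the user's age."""
    if user_age < MINIMUM_AGE:
        return False  # a parental-consent flow would be triggered here instead
    if sensitive and record.basis is LawfulBasis.CONSENT and not record.explicit_consent:
        return False  # sensitive data processed under consent needs explicit consent
    return True

def decide(score: float, significant_effect: bool) -> str:
    """Route decisions with legal or similarly significant effects to a human (Article 22)."""
    if significant_effect:
        return "queued_for_human_review"
    return "approved" if score >= 0.5 else "rejected"

record = ProcessingRecord(purpose="model_training",
                          basis=LawfulBasis.CONSENT,
                          explicit_consent=True)
print(may_process(record, user_age=17, sensitive=True))  # True
print(decide(0.9, significant_effect=True))              # queued_for_human_review
```

The point is not the specific checks but where they live: encoding them in code paths that every request passes through is one way to practice data protection by design rather than bolting it on later.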
In addition to the above, compliance with the data protection principles is also crucial when developing and deploying your AI application. As mentioned earlier, many principles of the GDPR align with the ethical principles of AI. The principle of fairness is especially significant here. Adherence to other GDPR principles, such as purpose limitation, data minimization, storage limitation, and security, can also greatly contribute to the fairness of your AI application and mitigate a variety of other potential risks.
📚 Discover more: how children’s data in AI apps can be processed compliantly
Taking these principles into account will help you ensure the responsible use of AI, making sure it doesn't perpetuate bias or discrimination and that its impact on your users and society is carefully considered. In this context, a Data Protection Impact Assessment (DPIA) becomes a vital tool.
By conducting a DPIA, you will be able to identify, document, and prepare to mitigate potential harm to your users, ensuring that your AI application aligns with both data protection principles and the general ethical considerations of AI. You might also consider the AI and Data Protection Risk Toolkit provided by the UK's data protection authority, the ICO. Though not required by law, this toolkit is designed to help map out potential risks and to complement (not replace) the DPIA, reinforcing your risk assessment process.
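For teams that like to keep this documentation close to the code, a DPIA's risk log can also be kept as structured data. The sketch below is purely illustrative; the fields and the likelihood-times-severity scoring loosely mirror the approach common in DPIA guidance, but your own register should follow your assessment methodology.

```python
from dataclasses import dataclass, field

@dataclass
class DPIARisk:
    """Hypothetical entry in a DPIA risk register."""
    description: str
    likelihood: int                 # e.g. 1 (remote) to 3 (probable)
    severity: int                   # e.g. 1 (minimal) to 3 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def risk_level(self) -> int:
        # Simple likelihood x severity score, for illustration only
        return self.likelihood * self.severity

register = [
    DPIARisk("Training data may encode demographic bias", likelihood=2, severity=3,
             mitigations=["bias testing before release", "diverse data review"]),
    DPIARisk("Users may not understand automated decisions", likelihood=2, severity=2,
             mitigations=["plain-language explanations in the UI"]),
]

# Review the highest-scoring risks first
for risk in sorted(register, key=lambda r: r.risk_level, reverse=True):
    print(f"[{risk.risk_level}] {risk.description} -> {risk.mitigations}")
```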
Keep in mind that the various GDPR obligations are interconnected. For instance, when you fulfill the requirements of a lawful basis, such as assessing the necessity and proportionality of your processing, it can help reduce the likelihood of unfair processing and align with the principle of fairness. Therefore, it's essential to adopt a comprehensive approach, seeing the bigger picture of how these obligations interrelate.
📚 Discover more: how to mitigate compliance risks when using AI in HR and recruitment
What are the compliance requirements for using someone else's AI model?
Other compliance requirements may apply if you decide to integrate a third-party AI application into your products rather than developing and deploying your own. In such cases, you need to carefully evaluate whether the third-party provider will have access to the data you collect. If it does, you may need to verify the provider's compliance with GDPR requirements and take appropriate measures to safeguard the data transfer.
For example, if you're planning to integrate OpenAI's ChatGPT API into your product, you will need to consider additional GDPR obligations. Similar services, such as ChatSonic or Jasper AI APIs, will generally require the same considerations.
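One technical safeguard worth pairing with those obligations is minimizing what personal data ever reaches the provider. The sketch below assumes the official openai Python SDK (v1+) with an API key in the environment; the redaction step is deliberately simplistic, covering only email addresses, and stands in for a real PII-scrubbing pipeline.

```python
import re
from openai import OpenAI  # official OpenAI SDK; reads OPENAI_API_KEY from the environment

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_pii(text: str) -> str:
    """Toy example: strip email addresses before the text leaves your systems.
    A real pipeline would cover names, phone numbers, IDs, and more."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

client = OpenAI()

def ask_assistant(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": redact_pii(user_message)}],
    )
    return response.choices[0].message.content
```

Minimization like this complements rather than replaces the legal work: you would still need a data processing agreement with the provider and appropriate safeguards for any international transfers.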
Moreover, if you choose to host and operate an existing AI model on your own servers, and the processed data isn't shared externally, you still have the responsibility to ensure its safe operation. A good example is Dolly 2.0, a recently released open-source, customizable AI model from Databricks that can be integrated into your own products and services.
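A self-hosted setup could look roughly like the sketch below, which loads a smaller Dolly 2.0 variant through Hugging Face's transformers library. The parameters follow Databricks' published usage for the model, but treat the details as illustrative rather than a production recipe.

```python
import torch
from transformers import pipeline

# Dolly 2.0 ships a custom instruction-following pipeline, hence trust_remote_code=True
generate = pipeline(
    model="databricks/dolly-v2-3b",  # smaller sibling of the 12B model, for illustration
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

result = generate("Summarize our data retention policy in one sentence.")
print(result[0]["generated_text"])
```

Because inference runs on your own hardware, prompts and outputs never reach a third party; the flip side is that securing the servers and monitoring the model's behavior is entirely on you.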
Even where the model provider doesn't have access to the data, understanding their approach to AI can provide assurance that the model itself doesn't cause harm to your users. For instance, it's important to understand the model's design and training process, including the measures taken to prevent inherent biases. Having a clear understanding of the model's specifications is crucial when assessing the risks associated with your AI application, for instance, during a DPIA preparation.
💡 Worth checking: how to navigate the UK’s AI regulations
Preparing for GDPR compliance of your AI application
With AI technology offering an ever-growing list of benefits, we understand that the more complex aspects of AI and data privacy can seem less interesting and far more challenging. That's why Legal Nodes provides data protection and privacy support that can help you get the most out of emerging AI technology without placing your business at risk. By maintaining regulatory compliance for your AI app, you'll be able to confidently grow your business, knowing your AI data protection needs are met.
Our team holds a robust understanding of both privacy requirements and the evolving nature of businesses, meaning we speak your language and bring a business-oriented approach to all our solutions. From mitigating privacy and security risks, to protecting your users and your data, to making the most of your AI application, we can help with it all. We support our clients by monitoring compliance, conducting DPIAs, updating policies and interfaces to stay compliant with the GDPR, and much more.
Start by booking a consultation with one of our privacy professionals to talk about the key privacy concerns and challenges you have.
Learn how to make your AI application compliant
Kostiantyn holds a certification as an Information Privacy Professional in Europe (CIPP/E). Fueled by his passion for law and technology, he is committed to the protection of fundamental human rights, including data protection. He adds a musical touch to his repertoire as an enthusiastic jazz pianist in his spare time.