February 23, 2024

Using AI in HR and Recruitment: Key Compliance Risks and How to Mitigate Them


In HR and recruitment, AI is increasingly taking on administrative duties. A recent survey of 250 HR managers found that 78% already use AI to some extent for keeping employee records, 77% for payroll processing and benefits administration, 73% for recruitment and hiring, and between 64% and 72% use AI tools for processes like performance management, onboarding, employee retention, employee education, and talent mobility management.

At the same time, regulators across the world are actively creating rules designed to oversee these activities. The reasons vary, but they primarily revolve around safeguarding candidates and employees against the risks of automated processing and a lack of human involvement. Regulators aim to prevent discrimination, ensure lawful and transparent data handling, and provide recourse mechanisms if something goes wrong.

Navigating these regulations can be challenging, particularly for businesses operating on a global scale. Recognizing this, the team at Legal Nodes have written this article to demystify the key regulatory frameworks, highlight the widespread concerns regulators have regarding AI in HR and recruitment, and outline basic strategies to mitigate them.

This guide should be helpful for recruiters, HR managers, and any specialists using and implementing AI systems in the workplace for HR purposes.

📚 Read more: What does the global AI regulatory landscape look like?

Which regulations apply when using AI in HR?

AI's use in HR and recruitment is governed by a diverse set of regulations spanning multiple jurisdictions. At the fundamental level, there are four key domains of law that are particularly relevant when deploying AI in HR and recruitment:

  1. Data Protection Regulations. These include comprehensive, far-reaching regulations such as the General Data Protection Regulation (GDPR) in the EU and the UK, as well as US privacy laws at the federal and state levels, e.g. the California Consumer Privacy Act (CCPA).
  2. Human Rights and Anti-Discrimination Legislation. These include the Human Rights Act 1998 and the Equality Act 2010 in the UK, Title VII of the Civil Rights Act of 1964 in the US, and the Equality Framework Directive 2000/78/EC in the EU.
  3. Upcoming AI-Centric Regulations. These are important to consider because they will soon come into effect and introduce new compliance requirements for AI, including in HR and recruitment, often with extraterritorial reach. Examples include the EU's AI Act, the US's AI Bill of Rights, and the proposed Algorithmic Accountability Act.
  4. Sector-Specific AI Laws in the HR Context. A prime example is New York City's recently enacted Local Law 144, which mandates annual bias audits of automated hiring tools for employers that use AI in their hiring processes in the city. Another example is Illinois's Artificial Intelligence Video Interview Act, which places strict obligations on employers that use AI-enabled analytics on interview videos.

Not clear on how the EU’s upcoming AI Act might impact businesses? Find answers in our recent article: the EU AI Act overview.

Additionally, many countries are issuing guidelines on the use of AI in various sectors. For instance, the UK has released Guidance on explaining decisions made with AI. Although not explicitly designed for employment contexts, the guidance can be very useful for HR teams and recruiters contemplating the use of AI, as it often includes HR examples to illustrate its points.

Other jurisdictions are releasing similar guidelines to help those implementing AI, often focusing specifically on the HR and employment context. For example, the US Equal Employment Opportunity Commission launched the Artificial Intelligence and Algorithmic Fairness Initiative to ensure that the use of software, including AI-based tools, complies with federal civil rights laws in hiring and other employment decisions.

Common regulatory concerns

The use of AI in HR and recruitment comes with its share of risks. To counter these, regulatory bodies have put forth specific frameworks and proposed various mitigation strategies. For a broad understanding, we've assembled the following list of basic areas of concern that one should take into account.

  • Ensuring you lawfully collect and process the personal data of your employees and candidates. Under most data protection regulations, such as the GDPR, you must be careful when identifying a legal ground for processing their personal data via AI. Consent, a commonly used lawful basis, is often unsuitable in the employment context because of the imbalance of power between employer and employee, so other lawful bases such as legitimate interests will usually have to be considered. This, in turn, triggers additional obligations, such as a documented legitimate interests assessment.
  • Maintaining transparency about the use of AI and data processing. Properly informing individuals about how and why you decide to use AI in your operations is required by almost every data protection regulation. In addition, many sector-specific regulations such as the earlier mentioned New York City’s Local Law 144 also provide for similar obligations.
  • Fairness and protection against bias and discrimination. Unintentional discrimination can be a significant problem when automating processes such as candidate screening or performance evaluation. Regulations like the GDPR recognize this risk and require safeguards that mitigate potential bias and discrimination, ensuring fairness in automated decision-making.
  • Human intervention in automated decision-making. Many data protection and anti-discrimination laws require organizations to implement measures to allow workers and job candidates to obtain human intervention, contest an automated decision, or at least express their point of view.

Moreover, when personal data is processed through AI, organizations must ensure that individuals can exercise their rights under applicable data protection legislation such as the CCPA and the GDPR. Although this can sometimes present technical challenges (as in the case of the right to rectification), organizations are obligated to implement suitable measures to comply with this regulatory requirement.

Remedies and compliance measures

So what exact steps can you take to make your use of an AI system for HR and recruitment more compliant and to mitigate these main concerns? There are three main steps:

  • Audits
  • Independent oversight and advice
  • Correction measures

We’ll explore each of these in turn below.

Audits

To address the concerns outlined above, organizations would have to consider many regulatory regimes and a variety of technical and organizational measures. Yet, the foundation of this process should be assessments and audits of their AI systems. These audits, mandated by the majority of the legal frameworks discussed earlier in this article, should aim to identify concerns associated with the AI system, as well as to strategize potential mitigation measures.

One key measure is the Data Protection Impact Assessment (DPIA) required under the GDPR. A well-executed DPIA is key to making sure your AI system is safe, and it will give you a clear picture of the risks your AI use case raises along with possible ways to address them.

Furthermore, certain jurisdictions, like the UK, are developing AI-specific audit frameworks aimed at promoting responsible AI use and preventing discriminatory HR and recruitment practices. For instance, the UK recently introduced a draft AI auditing framework, which emphasizes a risk-based approach and underscores the importance of transparency, fairness, and accountability in AI systems. The framework also provides a roadmap for organizations to ensure that their AI deployments are not only compliant with regulations but also ethically sound.

Future legislation, like the proposed AI Act, is also expected to introduce audit requirements for AI systems, including those in recruitment.
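
To make the audit step more concrete, below is a minimal, illustrative Python sketch of the kind of adverse-impact check that bias audits, such as those mandated by New York City's Local Law 144, revolve around: comparing selection rates across demographic groups and flagging large disparities. The data format, field names, and the 0.8 threshold (the classic "four-fifths" rule of thumb from US employment guidance) are assumptions made for illustration; a real statutory audit must follow the methodology prescribed by the applicable law and is typically carried out by an independent auditor.

```python
# Illustrative adverse-impact check: selection rates and impact ratios per group.
# All field names, sample data, and the 0.8 threshold are assumptions for
# illustration only; this is not a substitute for a formal statutory bias audit.
from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (group, selected) pairs, where selected is True/False."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            hits[group] += 1

    rates = {g: hits[g] / totals[g] for g in totals}
    best_rate = max(rates.values())
    # Impact ratio: each group's selection rate relative to the most-selected group.
    return {g: (rate, rate / best_rate if best_rate else 0.0) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: group A selected 40/100, group B selected 25/100.
    sample = ([("A", True)] * 40 + [("A", False)] * 60 +
              [("B", True)] * 25 + [("B", False)] * 75)
    for group, (rate, ratio) in impact_ratios(sample).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

Running a check like this on real outcome data, at the cadence the applicable law requires, is part of what turns a one-off assessment into an ongoing audit.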

Try Legal Nodes’s free AI Act Self-Assessment Tool to see if your use of AI for HR purposes might be affected by the EU’s new AI legislation.

Independent oversight and advice

When carrying out audits and monitoring your HR and recruitment practices involving AI, independent oversight and expert advice are also important. To facilitate this, regulations such as the GDPR require many organizations to appoint a Data Protection Officer (DPO). The involvement of a DPO helps your organization supervise the responsible use of AI and ensure that audits are conducted correctly and independently.

The role of a DPO is in many ways similar to that of an AI Ethics Officer, as both positions involve overseeing fair and transparent deployment of AI. However, while a DPO focuses on data privacy and ensuring adherence to regulations, the AI Ethics Officer's role focuses more on addressing ethical aspects of AI use.

For effective compliance, it's beneficial to have someone with a deep understanding of AI who not only helps you navigate current rules but can also anticipate future ethical (and legal) challenges. A DPO with a strong grasp of AI risks and potential issues can provide invaluable guidance on deploying this technology ethically and in compliance with regulations.

Legal Nodes offers a Virtual DPO subscription service providing Data Protection Officer support starting from 199 USD / month.

Every DPO in the Legal Nodes Network holds a CIPP/E certification, and many have already helped a variety of AI-driven startups and projects with privacy and data protection matters.

Correction measures

Once the risk assessment is complete, and you've received insights from a compliance expert such as your DPO, it's important to address and mitigate any identified issues to ensure you're on the right side of the law.

  • Getting the lawful basis right. A good first step is to conduct a comprehensive review of your data processing activities and clearly identify the purpose of each. Keeping your records up to date and regularly consulting your DPO will help you identify the most appropriate lawful basis for your AI use case and build a solid foundation for all of your subsequent operations.

  • Providing proper transparency. This can be achieved by clearly explaining how your AI will be used and meeting all information requirements set by various legislation. At a fundamental level, inform your employees or job applicants about your AI usage, its purposes, and their privacy rights, such as access and deletion.

  • Ensuring fairness and protection against bias and discrimination. Here, you could consider opting for a thoroughly tested and verified AI solution. If you're developing your own AI, you will instead need to carefully map out and implement specific technical measures, such as the kind of adverse-impact testing sketched in the audits section above.

  • Allowing for human intervention in automated decision-making. Implement procedures that allow for human involvement in automated decision-making and strike a balance between technology and human judgment; a minimal sketch of such a gate follows this list. Selecting suitable measures, documenting your processes, and training staff are the most fundamental compliance steps at this stage.
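
To illustrate the last point, here is a minimal, hypothetical sketch of how an automated screening step can be gated so that no adverse outcome becomes final without a human reviewer. Every class, function name, and threshold below is an assumption made for illustration; the design choice it demonstrates is simply that the AI recommends while a person decides and documents the decision.

```python
# Hypothetical human-in-the-loop gate for AI-assisted candidate screening.
# The model may advance strong candidates automatically, but it never issues a
# rejection on its own: borderline and negative outcomes go to a human reviewer.
from dataclasses import dataclass, field

@dataclass
class Decision:
    candidate_id: str
    ai_score: float                 # score from the screening model (assumed 0..1)
    outcome: str = "pending"        # "advance", "pending_human_review", or a final human outcome
    reviewer_notes: list = field(default_factory=list)

REVIEW_QUEUE: list[Decision] = []   # cases awaiting a recruiter's final call

def screen(candidate_id: str, ai_score: float, advance_threshold: float = 0.7) -> Decision:
    """Advance clear passes automatically; route everything else to human review."""
    decision = Decision(candidate_id, ai_score)
    if ai_score >= advance_threshold:
        decision.outcome = "advance"
    else:
        decision.outcome = "pending_human_review"
        REVIEW_QUEUE.append(decision)
    return decision

def record_human_review(decision: Decision, final_outcome: str, note: str) -> None:
    """Documenting the reviewer's reasoning supports contestability and later audits."""
    decision.outcome = final_outcome
    decision.reviewer_notes.append(note)
```

In practice you would also log who reviewed each case and when, and give candidates a route to request a re-review, so that the record supports both contestability and the audits described earlier.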

It's important to note that there isn't a one-size-fits-all solution. Each case must be individually assessed based on the specifics of an organization's HR and recruitment operations.

Checklist: general compliance measures for using AI in HR

Take steps now to compliantly adopt AI into your HR

As AI gains traction in HR and recruitment, deciphering its regulations can pose challenges. Regulators have introduced comprehensive frameworks and practical solutions for the ethical use of AI in HR, but putting these into practice can present its own set of hurdles.

At a foundational level, organizations should think about conducting audits, obtaining expert compliance advice, and applying corrective actions tailored to their specific needs.

Legal Nodes is here to guide you through this. Our team, with diverse expertise in various AI applications including HR, ensures a seamless compliance journey that will take into account all your business priorities.

Kickstart your journey with a free 30-minute consultation. Our privacy specialists will help you navigate the compliance landscape tailored to your situation. Schedule your session using the button below.

Disclaimer: the information in this article is provided for informational purposes only. You should not construe any such information as legal, tax, investment, trading, financial, or other advice.

Figure out the compliance of your AI in HR use case

Book a call

Kostiantyn holds a certification as an Information Privacy Professional in Europe (CIPP/E). Fuelled by his passion for law and technology, he is committed to the protection of fundamental human rights, including data protection. He adds a musical touch to his repertoire as an enthusiastic jazz pianist in his spare time.
