May 7, 2024

How to Process Children’s Data in AI Apps in a Compliant Way

Children are a growing target audience for many companies in the business of developing and marketing apps on electronic devices. With AI on the rise, what steps must app developers and businesses take to protect the privacy rights of the children using their apps, and to protect their own interests by remaining compliant with privacy laws?

If you process children’s data, you need to offer adequate protection. California Attorney General Rob Bonta put it aptly: “… we should be able to protect our children as they use the internet. Big businesses have no right to our children’s data: childhood experiences are not for sale.” This and other related AI issues are discussed below.

This article is brought to you by the Legal Nodes privacy team. Legal Nodes is a legal platform for tech companies operating globally. We help startups establish and maintain legal structures in 20+ countries, including assisting with their privacy compliance obligations across the globe.

Please note: none of this information should be considered as legal, tax, or investment advice. Whilst we’ve done our best to make sure this information is accurate at the time of publishing, laws and practices may change.

This article serves as practical compliance guidance for operators developing AI apps for children, and for those developing apps that children may end up using. The practical tips it contains apply to any company processing children’s data, and to any app aimed at children – not only strictly AI apps.

In this article, you will find an analysis of the jurisdictions most relevant to this topic: the EU/EEA countries, the UK, and the USA.

📚 Read more: Explore the global AI regulatory landscape with our global AI regulations tracker

Protecting children in the age of AI apps

In the context of privacy rights, children are a vulnerable category of data subjects. Under the GDPR, children are afforded specific protection, both via the GDPR’s binding articles and via its recitals, which explain the application of the law and add depth to it. Recital 38 to the GDPR says: “Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences, and safeguards concerned and their rights in relation to the processing…”. The GDPR makes clear that no one is permitted to misuse or take advantage of children’s vulnerability in any way.

Many children have access to an electronic device such as a tablet or smartphone, and many will access various applications, or ‘apps’, via their device. The explosion of AI features in these apps, along with brand-new AI apps entering the market, only deepens the vulnerability of children. Many adults fail to fully understand the risks associated with their children’s electronic devices, let alone the AI found in the apps themselves. If this is the reality for most parents, what does it mean for most children?

From a developer standpoint, considering the best interests of the children who use your app should translate into all aspects of the design of your AI app. Respecting those interests properly means complying with the applicable ‘lawfulness, fairness and transparency’ principle and taking Recital 38 to the GDPR fully into account. The principle of ‘the best interests of the child’ is therefore something you specifically need to consider when designing your AI app.

Last but not least, AI apps target global markets containing a mix of child and adult audiences. Even if you do not target children specifically, or believe there are no minors using your AI app, you may be wrong. Because anyone can access an app store, you can never be entirely sure who is using your AI app on the other end of the internet. You must provide adequate protection to your underage users; otherwise, you may be in breach of one or several of the many laws a globally available app is exposed to.

Privacy regulation in the UK and Europe

There are several key pieces of regulation to explore in the context of Europe and the UK, chiefly the EU GDPR, its UK counterpart the UK GDPR, and the ICO’s guidance on children’s data.

In Europe, the GDPR requires parental consent before processing the personal data of children below the applicable age of consent. That age varies between EU/EEA member states and ranges from 13 to 16 years old.

When processing children’s data, you must either obtain verifiable consent from a parent or guardian, or assess the child’s competence if you rely on an alternative lawful basis for processing. You must make reasonable efforts, taking available technology into account, to verify that the person giving consent actually holds parental responsibility for the child. Furthermore, if you target children over 13 who can consent for themselves, you must provide a clear, age-appropriate privacy notice for them.

The right to erasure is important, particularly for data collected on the basis of consent given when the data subject was still a child. When processing children’s data, you must take appropriate measures to ensure it is safeguarded.

AI app developers must offer clear terms and privacy notices explaining what personal information is collected, how it is used, and who it is shared with. The privacy notice must also provide instructions for parents to review and delete their child’s personal information upon request.

📚 Read more: Learn how to incorporate privacy protection into AI product design

AI app developers must ensure that their apps are designed with children’s privacy and safety in mind: incorporate age-appropriate language, icons, and images to explain data collection and usage, and offer parental control features such as limiting access to specific features and setting time limits.

👉 Get an EU-US DPF self-certification for your business

Children’s online privacy protection in the USA

The USA has its own law concerning children’s privacy rights online: the Children’s Online Privacy Protection Act (COPPA), implemented through the FTC’s COPPA Rule. Apps collecting data from children under 13 must adhere to COPPA. Under it, operators of AI apps targeting, or knowingly collecting personal information from, children under 13 must obtain verifiable parental consent.

A key requirement is having a COPPA-compliant privacy notice, which should include:

  • A description of the types of personal data collected from children, the purposes of collection, and how the data is handled.
  • A list of third-party operators (e.g. social plugins, widgets, ad networks).
  • An explanation of parental rights and the procedures for exercising them.

Under COPPA, AI app developers must secure parental consent before collecting a child’s personal information. Privacy notices must be clear and concise, specifying data collection, use, sharing, and instructions for parental data review and deletion. AI app developers must also implement data protection measures, including encryption and restricted access to authorized personnel only.
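
To make those last two measures concrete, here is a minimal sketch, assuming the third-party `cryptography` package, of keeping children’s records encrypted at rest with the key held outside the codebase (e.g. in a secrets manager restricted to authorized personnel). The class and variable names are illustrative, not a standard API.

```python
# Minimal sketch: children's records encrypted at rest.
# Assumes: pip install cryptography. Names are illustrative.
import json
from cryptography.fernet import Fernet

class ChildRecordStore:
    """Stores children's records as ciphertext; reading them back
    requires the key, which should never live in the codebase."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._records: dict[str, bytes] = {}  # user_id -> ciphertext

    def save(self, user_id: str, record: dict) -> None:
        plaintext = json.dumps(record).encode("utf-8")
        self._records[user_id] = self._fernet.encrypt(plaintext)

    def load(self, user_id: str) -> dict:
        return json.loads(self._fernet.decrypt(self._records[user_id]))

# Generate the key once and keep it in a secrets manager, not in code.
key = Fernet.generate_key()
store = ChildRecordStore(key)
store.save("child-42", {"display_name": "Sam", "age_band": "under-13"})
```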

How can I determine if my AI app falls under these laws?

  • The above laws apply to your AI app if you target users in the region the laws come from, regardless of the physical location of your business or servers.
  • If you have an establishment in the region (such as a physical office, or a company registered under that country’s laws even if your team works remotely from around the world), you must respect the age limits set by the applicable local law(s) when processing the personal data of children based there.
  • If you lack an establishment but actively pursue users in that region, the same age limits apply.
  • If neither applies, you are not bound by the region’s laws. (A minimal decision sketch follows this list.)
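
To make the test concrete, here is a minimal decision sketch mirroring the bullets above. It is an illustration of the logic, not legal advice; the function and parameter names are our own.

```python
# Minimal sketch of the applicability test described above.
def region_laws_apply(has_establishment_in_region: bool,
                      actively_targets_users_in_region: bool) -> bool:
    """A region's children's-data rules apply if you are established
    there or actively pursue users there, regardless of server location."""
    return has_establishment_in_region or actively_targets_users_in_region

# Example: a US-registered company with no EU office that localizes its
# app for France and markets it there is still caught by EU rules.
print(region_laws_apply(has_establishment_in_region=False,
                        actively_targets_users_in_region=True))  # True
```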

📚 Read more: Discover how a DPO and an AI Ethics Officer can help your business stay compliant with AI-regulations

When is an AI app offered directly to a child?

An AI app is considered to be offered directly to a child when:

  • The AI app explicitly targets children of any age; or
  • The AI app is available to everyone, either without age restrictions or with an age restriction below 18; or
  • Despite a claimed age limit, evidence such as site content, marketing plans, systems or processes designed to limit access, and information provided to users suggests that the AI app is in fact accessible to children.

If an AI app is only made available to users aged 18 and over, it is not directed at children. Consider your target audience and be clear about what age group you intend to allow access to your AI app. If it is not intended for children, take steps to prevent their access.

Compliance with AI / privacy rules when processing children’s data

Who provides consent to processing: a child or a parent?

In short, it depends on two factors: the age of the child and the child’s country of residence. If you rely on consent as your lawful basis for processing in the context of offering an online service directly to a child, you need parental authorization for every child under 13. This rule applies equally whether the child is in an EU/EEA member state, the UK, or the USA.

From the age of 16, minors can consent to the processing of their personal data on their own; parental consent is not required.

In many cases, the trickiest interval remains ages 13-15, as the answer depends heavily on the particular national privacy legislation and its consent requirements, as discussed below.

Age of consent in EU/EEA countries

The GDPR requirements regarding the age of consent warrant separate analysis, given the variety and complexity of national implementations.

Under the GDPR, the default age at which a child can consent to information society services is 16, but EU/EEA member states may set a lower age for such purposes, provided it is not below 13. Wherever a minor is not entitled by law to consent on their own, the GDPR requires the operator of the AI app to obtain parental consent before collecting and processing any personal data of the child.

Some EU/EEA member states follow the GDPR’s default age for offering information society services directly to a minor. Processing a child’s personal data on the basis of the child’s own consent is therefore lawful from 16 years of age in countries such as Ireland, the Netherlands, Croatia, Poland, and Romania.

At the same time, quite a few EU/EEA countries have chosen to lower the threshold age for a child’s consent to 13, 14, or 15 years old: for example, France has set it at 15, Italy and Spain at 14, and Denmark and Sweden at 13.
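
As a concrete illustration of this routing, here is a minimal lookup sketch. The per-country ages reflect the well-known examples above, but treat them as assumptions to be verified against current national law before relying on them.

```python
# Minimal sketch: routing consent by country and age.
# Ages are illustrative; verify against current national law.
DEFAULT_GDPR_AGE = 16  # Article 8 GDPR default

AGE_OF_DIGITAL_CONSENT = {
    "IE": 16, "NL": 16, "HR": 16, "PL": 16, "RO": 16,  # GDPR default
    "FR": 15,                                           # lowered to 15
    "IT": 14, "ES": 14,                                 # lowered to 14
    "DK": 13, "SE": 13, "BE": 13,                       # GDPR minimum
    "GB": 13,                                           # UK GDPR
    "US": 13,                                           # COPPA threshold
}

def needs_parental_consent(age: int, country: str) -> bool:
    """True if consent must come from (or be authorized by) a parent."""
    return age < AGE_OF_DIGITAL_CONSENT.get(country, DEFAULT_GDPR_AGE)

print(needs_parental_consent(14, "FR"))  # True: below France's 15
print(needs_parental_consent(14, "IT"))  # False: Italy allows from 14
```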

How to obtain parental consent

Article 8(1) GDPR provides two options: either the holder of parental responsibility gives consent directly, or the parent authorizes the consent given by the child.

There are various methods to verify identity and collect consent, including:

  • a copy of a passport or ID provided via email
  • a consent or authorization letter signed by the parents and sent via email
  • an online order processed through the parents’ credit card
  • the parents’ consent or authorization expressed via phone
  • a double opt-in flow (sketched below)
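
As an illustration of the last method, here is a minimal double opt-in sketch: the parent receives an email containing a one-time token and must confirm before any of the child’s data is processed. `send_email` is a stub standing in for your mail provider, and the token lifetime and URL are illustrative choices.

```python
# Minimal double opt-in sketch. send_email() is a stub; the token TTL
# and confirmation URL are illustrative assumptions.
import secrets
import time

PENDING: dict[str, tuple[str, float]] = {}  # token -> (child_id, issued_at)
TOKEN_TTL_SECONDS = 48 * 3600  # assumption: tokens expire after 48 hours

def send_email(to: str, subject: str, body: str) -> None:
    print(f"[email to {to}] {subject}: {body}")  # stub: wire to your provider

def request_parental_consent(child_user_id: str, parent_email: str) -> None:
    """First opt-in: email the parent a one-time confirmation link."""
    token = secrets.token_urlsafe(32)
    PENDING[token] = (child_user_id, time.time())
    link = f"https://example.com/consent/confirm?token={token}"
    send_email(parent_email, "Confirm consent for your child",
               f"Click to confirm: {link}")

def confirm_parental_consent(token: str) -> str | None:
    """Second opt-in: returns the child's user id if the token is valid."""
    entry = PENDING.pop(token, None)
    if entry is None:
        return None
    child_user_id, issued_at = entry
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return None  # expired; consent must be re-requested
    return child_user_id
```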

Age assurance methods and technologies for age verification

By setting an age limit above which children can, in certain cases, validly consent to the processing of their own data, Article 8(2) GDPR implicitly establishes the need to verify age. Online service providers have an obligation to check age and parental consent and must make reasonable efforts to do so, taking into account the technologies available. 

A good example of an age assurance method that should have been implemented (but was not) comes from the Italian case Garante per la protezione dei dati personali – 9852212. The Italian data protection authority established that Replika, an AI-powered chatbot app, had no age verification procedure in place when users created an account. Moreover, the controller had not implemented any mechanism to block users who could reasonably be believed to be underage during their use of Replika, e.g. based on the content of their responses. The case points to a valuable safeguard: banning or blocking mechanisms should be triggered when a user explicitly declares that he or she is underage (see the sketch after the list of methods below).

You should use measures that suit the specifics of your individual service. A method must be appropriate to the risks that arise from your data processing:

  • Self-declaration: a user simply states their age but does not provide any evidence to confirm it.
  • Artificial intelligence (AI): it may be possible to make an estimate of a user’s age by using AI to analyze the way in which the user interacts with your service.
  • Third-party age verification services: such services typically work on an ‘attribute’ system where you request confirmation of age or age range and the service provides you with a ‘yes’ or ‘no’ answer.
  • Account holder confirmation: confirmation of user age from an existing account holder who you know to be an adult.
  • Technical measures that discourage false declarations of age, or identify and close under-age accounts. 
  • Hard identifiers: formal identification documents or ‘hard identifiers’ such as a passport.
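
Drawing on the Replika decision above, here is a minimal sketch that combines a self-declaration gate at sign-up with the blocking behavior the Garante found missing: if a user declares, at sign-up or later in conversation, that they are underage, the account is locked pending parental consent. The minimum age and function names are illustrative assumptions.

```python
# Minimal sketch: self-declaration age gate plus blocking on an
# explicit underage declaration (per the Replika case). Illustrative.
MIN_AGE = 13  # assumption: the app's stated minimum age

BLOCKED: set[str] = set()

def signup_age_gate(user_id: str, declared_age: int) -> bool:
    """Refuse account creation when the declared age is below MIN_AGE."""
    if declared_age < MIN_AGE:
        BLOCKED.add(user_id)
        return False
    return True

def on_underage_declaration(user_id: str) -> None:
    """Call when your moderation layer detects an explicit underage
    declaration during use (e.g. the user writes 'I am 11'): lock the
    account until parental consent is verified."""
    BLOCKED.add(user_id)
```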

AI bias and discrimination

Bias and discrimination in AI systems can harm children when those systems are used for age estimation or verification. For instance, biometric age estimation algorithms may be inaccurate for minors and conclude that an individual is younger than he or she actually is. A child may consequently be rejected from certain AI apps due to the app’s age limits, which in turn creates a problem for the company if its target audience is being prevented from accessing the product.

Bias may particularly affect children with medical conditions and children who are Black, brown, Latino, or otherwise people of color, for whom biometric algorithms may perform especially poorly. To address this, developers should incorporate reasonable adjustments for disabled individuals and provide a process for challenging incorrect age decisions made by AI algorithms.

Another issue is bias and inaccuracy stemming from inadequate training data. AI app developers should use high-quality and relevant datasets to train their algorithms and consider capture bias when using biometric data.

As a general recommendation, operators of AI apps must consider the requirements of applicable laws to prevent discriminatory outcomes in age estimation or verification. They should offer alternative routes for age verification so that children are not unfairly excluded and their personal data is processed fairly.
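
One way to honor that recommendation in code is to route low-confidence biometric estimates to an alternative verification method instead of rejecting the user outright. The threshold and return values below are illustrative assumptions.

```python
# Minimal sketch: fall back to an alternative age check when a
# biometric estimate is low-confidence. Values are illustrative.
CONFIDENCE_THRESHOLD = 0.90

def route_age_check(estimated_age: float, confidence: float,
                    required_age: int) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        return "offer_alternative"  # e.g. hard identifier, parent confirmation
    if estimated_age >= required_age:
        return "allow"
    return "offer_alternative"      # let the user challenge the estimate

print(route_age_check(estimated_age=12.4, confidence=0.62, required_age=13))
# -> "offer_alternative": low confidence, so do not auto-reject
```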

Preventing the detrimental use of children’s data

You should ensure that all optional uses of personal data, such as AI training, are off by default and only activated after valid consent is obtained: from the child’s parent or guardian where applicable, or from the child, if the child has reached the required age.

Precise geolocation and profiling of children should not be used either, unless it is the key functionality or the very purpose of the AI app.
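
Here is a minimal sketch of what those defaults can look like in an account model, assuming illustrative field names: every optional use starts disabled and can only be enabled together with a recorded consent.

```python
# Minimal sketch: privacy-protective defaults for a child's account.
# Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ChildPrivacySettings:
    ai_training_opt_in: bool = False   # off until valid consent
    precise_geolocation: bool = False  # off unless core functionality
    profiling: bool = False            # off unless core functionality
    consent_log: list[str] = field(default_factory=list)

    def enable(self, setting: str, consent_record_id: str) -> None:
        """Enable an optional use only together with a consent record."""
        self.consent_log.append(consent_record_id)
        setattr(self, setting, True)

settings = ChildPrivacySettings()  # everything off by default
settings.enable("ai_training_opt_in", consent_record_id="consent-2024-001")
```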

Being transparent in child-friendly AI apps

Proactive steps can be taken to promote genuine transparency, making the AI app friendlier towards children and safer for them to use with respect to their privacy rights. Examples of child-friendly AI app design include:

  • Offering transparency information at different levels of difficulty, catering to users’ abilities rather than just age groups (e.g. beginner, intermediate, expert).
  • Using diverse communication methods for privacy information, such as age-appropriate videos, graphics, and bite-sized content with engaging storylines, in-game pop-ups, or messages, tailored to various age groups.
  • Considering the average data literacy levels of children and parents and providing resources to assist them in understanding privacy information.
  • Unbundling privacy information for enhanced engagement and comprehension, avoiding the urge to consolidate all details in one place. Instead, provide relevant information at key points where it is most needed and useful.
Example of a parental gate: JumpApp labels the parent section, and will only open it if the user presses and holds (not just taps)
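
For illustration, here is a minimal sketch of the press-and-hold logic behind a parental gate like the one in the caption above: the parent section opens only after a sustained press, which a young child is unlikely to perform by accident. The three-second hold is an illustrative choice.

```python
# Minimal sketch: press-and-hold parental gate. Hold time is illustrative.
import time

HOLD_SECONDS = 3.0

class ParentalGate:
    def __init__(self) -> None:
        self._pressed_at: float | None = None

    def press(self) -> None:    # call on touch-down
        self._pressed_at = time.monotonic()

    def release(self) -> bool:  # call on touch-up; True opens the gate
        if self._pressed_at is None:
            return False
        held = time.monotonic() - self._pressed_at
        self._pressed_at = None
        return held >= HOLD_SECONDS
```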

Child-centric AI

In its report “Designing data transparency for children”, the UK’s Information Commissioner’s Office (ICO) summarized transparency best practices:

  • Presenting privacy information in a child-friendly manner, using language, icons, or media suitable for the age of child users.
  • Positioning privacy information within the user experience to responsibly engage children’s attention.
  • Providing solutions that offer bite-sized, on-demand privacy information for children when needed.
  • Always prioritizing the best interests of children in the design of privacy information.
Source: “Designing data transparency for children” (pp. 38, 40)

Sharing data with third parties

Sharing children’s data with third parties, e.g. service providers, requires extra care as it can expose underage users to unintended risks if not done properly.

The ICO’s golden rule states: “Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.” Safeguarding children and sharing data for official statistics are examples of compelling reasons named by the ICO; selling children’s personal data for commercial reuse is not.

Given that sharing children’s data concerns a vulnerable category of data subjects, it is good practice to conduct a Data Protection Impact Assessment (DPIA). A DPIA will evaluate and mitigate potential risks to the rights of minors. We explore DPIAs further on in this article.

You should carefully assess any third party with whom you are going to share data. Ask them to provide their data security certifications and privacy policies for review, and pay special attention to the documented security measures.

If you have vetted your service provider and decided to share the personal data of underage users with them, you need to conclude a Data Processing Agreement (DPA). A DPA helps ensure that the data sharing is safe and that children’s rights in particular are protected. It is important to stipulate in the DPA that the service provider may not use children’s data to improve its own services.

It is your obligation, as the operator of the AI app, to specify the purposes of sharing and the recipients of shared children’s data in your privacy notice.

📚 Learn more: what is DPF self-certification for businesses?

Data protection impact assessments

According to Art. 35(1) GDPR, you must conduct a DPIA where the type of processing you apply is likely to result in a high risk to the rights and freedoms of individuals. Both the Article 29 Data Protection Working Party (the predecessor of the European Data Protection Board) and the ICO consider that processing data concerning vulnerable data subjects may result in a high risk. In particular, the ICO lists targeting children (e.g. intending to offer online services directly to them) among its high-risk criteria.

At a very early stage, long before implementing your AI app, you should conduct a DPIA to evaluate and mitigate risks to the rights and freedoms of children. A DPIA will draw attention to important areas you may need to address and will help you achieve the following goals:

  • Identify potential risks associated with the AI app’s operations.
  • Assess the AI app’s impact on children, considering various ages, capacities, and needs.
  • Ensure that you consult both parents and minors about their views on the use of the AI app.
  • Evaluate any residual risks to determine if consultation with a data protection authority is required.
  • Seek a data protection officer’s (DPO) opinion on the DPIA’s findings and recommendations.

Conclusions

AI app developers must take extra precautions when their apps target children. Data handling requires extreme care, and adherence to regulations such as the EU/UK GDPR and COPPA is essential. These regulations require parental consent, advocate minimal data collection, and emphasize the need for age-appropriate notices about data collection and sharing.

Compliance checklist

Here are specific steps you can proactively take to ensure your AI application complies with regulations on children’s data:

  • Describe the types of children’s personal information processed online, the purpose, and how it is handled.
  • List all operators processing this data, including third-party operators like social plugins, widgets, and ad networks.
  • Explain parental rights regarding their child’s data and the procedures to exercise these rights.
  • Notify parents directly and obtain verifiable consent before collecting children’s personal information.
  • Grant parents access to their child’s data for review or deletion.
  • Allow parents to withdraw consent and halt further processing of a child’s data.
  • Maintain data confidentiality, security, and integrity, ensuring third parties can do the same.
  • Retain personal information collected from children only as long as is necessary to fulfill the purpose for which it was collected; delete when no longer necessary.
  • Avoid making a child’s access to an online activity contingent on providing excessive information beyond what is reasonably required.
  • Ensure AI training is off by default and precise geolocation and profiling of children is not used.
  • Prioritize the best interests of children in the design of your AI app.
  • As good practice, conduct a DPIA at an early stage, ideally before implementing your AI app.
  • Assess third parties with whom you are going to share children’s data and conclude DPAs with them.

How Legal Nodes can help

The privacy team at Legal Nodes has worked with multiple clients who need to comply with UK, EU, US, and European national privacy laws. We support clients with:

  • becoming privacy notice compliant with GDPR/UK GDPR/COPPA/European national laws. This means that we create privacy notices for your AI app and app stores
  • privacy interface recommendations, including parental control
  • recommendations for obtaining parental consent
  • vendor assessment
  • Data Protection Impact Assessments (DPIAs)
  • Data Processing Agreements (DPAs) with third parties like service providers

As the number of AI apps and apps with AI features continues to grow, we’re here to help founders, developers, and businesses remain compliant and proactively protect their users and their users’ data. For help understanding which data capture and data processing regulations apply to your circumstances, speak to a privacy specialist at Legal Nodes today.

Kickstart your privacy compliance process

Book a free call

Anna is a Privacy Associate at Legal Nodes. With a background in startup and IT law, she combines that experience with her work in privacy and data protection. Anna is passionate about music and exploring the world. She lived in Finland for six years while studying there and has visited 20+ countries.
