AI Regulation and Data Protection in Europe

The European Data Protection Board (EDPB) is not standing in the way of the development and use of artificial intelligence (AI) models. That much is clear from a statement the European data protection authorities have published on how AI models relate to the General Data Protection Regulation (GDPR).

According to the data protection authorities, companies like Meta, Google, and OpenAI can generally rely on “legitimate interest” as a legal basis for processing personal data through AI models. However, the EDPB attaches this approval to several conditions.

Three-Step Test

National data protection authorities are to use a three-step test to assess whether a legitimate interest exists. First, they should determine whether the interest invoked for the data processing is itself legitimate. Next, a “necessity test” checks whether the processing is actually necessary for that purpose. Finally, the fundamental rights of the affected individuals must be weighed against the interests of the AI providers.
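The three steps are cumulative: failing any one of them rules out “legitimate interest” as a legal basis. A minimal sketch, with illustrative names chosen by the editor rather than taken from the EDPB statement, and with no legal significance:

```python
# Illustrative sketch only: models the EDPB's three-step legitimate-interest
# test as a conjunctive boolean check. Field names are the author's own
# interpretation, not official terminology.
from dataclasses import dataclass

@dataclass
class Assessment:
    interest_is_legitimate: bool       # step 1: is the claimed interest legitimate?
    processing_is_necessary: bool      # step 2: the "necessity test"
    balancing_favors_processing: bool  # step 3: data subjects' rights vs. provider interests

def legitimate_interest_applies(a: Assessment) -> bool:
    """All three steps must succeed; failing any single step rules out the legal basis."""
    return (a.interest_is_legitimate
            and a.processing_is_necessary
            and a.balancing_favors_processing)

# Example: necessity is not shown, so the legal basis cannot be relied upon.
print(legitimate_interest_applies(Assessment(True, False, True)))  # False
```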

Regarding the weighing of fundamental rights, the EDPB emphasizes that “specific risks” to civil rights could arise during the development or deployment of AI models. To assess such impacts, supervisory authorities should consider “the nature of the data processed by the models,” the context, and “any other possible factors.” Essentially, “the specific circumstances of each case” must be taken into account.

As an example, the board cites a voice assistant designed to help users improve their cybersecurity. Such a service could benefit individuals and rely on a legitimate interest, but only if the processing is strictly necessary and all the rights involved are properly balanced.

Clarifications on Anonymization

If an AI model was developed using personal data that had been processed unlawfully, the EDPB holds that its use could be prohibited altogether, unless the model has been properly anonymized. For anonymization, it must be highly unlikely that individuals can be “directly or indirectly identified.” In addition, it must be ensured that personal data cannot be extracted from the model through queries.
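Both anonymization conditions must hold at once; neither is sufficient on its own. A minimal sketch of that conjunction, with hypothetical parameter names introduced purely for illustration:

```python
# Illustrative sketch: the EDPB's two anonymization conditions expressed as a
# single check. Parameter names are hypothetical, not from the EDPB statement.
def model_counts_as_anonymous(identification_likely: bool,
                              extractable_via_queries: bool) -> bool:
    """Anonymous only if direct/indirect identification is highly unlikely
    AND personal data cannot be extracted from the model through queries."""
    return not identification_likely and not extractable_via_queries

# Example: data can still be extracted via queries, so the model fails the test.
print(model_counts_as_anonymous(False, True))  # False
```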

The EDPB set up a task force on ChatGPT in mid-2023 at the instigation of the Irish Data Protection Commission, in response to a temporary ban on the system imposed by the Italian data protection authority. With their joint statement, the data protection authorities aim to ensure uniform enforcement of the law across the EU.

“We must ensure that these innovations are conducted ethically and safely, benefiting everyone,” emphasized EDPB Chair Anu Talus. The IT association CCIA welcomed the clarifications on legitimate interest, calling them “an important step towards greater legal certainty.”