AI Act: Prohibitions and Competency Requirements for AI Systems in the EU

AI systems posing unacceptable risks are now prohibited in the EU. This includes applications such as social scoring, in which citizens are systematically monitored. Although such systems have not existed in the EU so far, the AI Act, which entered into force last August, ensures that they cannot be deployed in the future. The ban applies from February 2, 2025, as the law takes effect in stages.

In addition to the prohibition of certain AI applications, other obligations take effect on this date. Employees may only use AI in the workplace if they are sufficiently competent to do so. What "sufficiently competent" means, however, remains somewhat vague. According to Article 4 of the AI Act, providers and operators of AI systems "shall take measures to ensure, to the best of their knowledge and belief, that their staff and other persons involved in the operation and use of AI systems on their behalf have sufficient AI competence." Competence encompasses "technical knowledge, experience, education, and training," and users must also understand the context in which they intend to use AI.

This obligation applies to all companies, regardless of size or line of business, and to all AI applications, regardless of their risk category. It is also unclear who qualifies as an operator: whether employees who use tools like ChatGPT in their daily work are already covered, or whether the rule only applies to those who set up their own chatbot and thereby operate an AI application. More information on AI competence can be found in our FAQ.

As with the regulation of General Purpose AI, the AI Act initially provides no penalties if employees lack sufficient competence, something that would in any case be difficult, though probably not impossible, to determine. In general, the penalties set out in Article 99 apply from August 2, 2025. By then, member states must have designated the national authorities that will supervise the AI Act.

For AI models with a general purpose, known as General Purpose AI (GPAI), guidance describing how to handle them in a legally compliant manner is due in three months. Three months after that, twelve months after the AI Act entered into force, the obligations for GPAI also take effect. These include a technical documentation requirement, provisions on the still unresolved issues of copyright and AI, disclosure of training data, and more.

The AI Act requires an adequate level of AI competence. What that means in practice is discussed in more detail in iX Magazine.