Understanding the AI Act: Ensuring AI Competence in Europe

The AI Act is a major regulation adopted by the EU legislature, and it increasingly shapes the use of artificial intelligence, especially in Europe. Companies and public authorities must stay informed about which of its obligations apply to them. A crucial one is ensuring that their personnel possess sufficient AI competence.

The AI Act mandates AI competence from February 2, 2025, as part of its first phase, in which the provisions of Chapters I and II take effect. Chapter I contains general provisions, such as the scope of the regulation and its definitions, along with Article 4, which addresses AI competence. Chapter II consists solely of Article 5, which sets out the prohibited AI practices, including bans on applications such as social scoring systems.

Who must comply with the AI competence requirement? According to Article 4 of the AI Act, providers and operators of AI systems must take measures to ensure that their personnel, and anyone else operating or using AI systems on their behalf, possess adequate AI competence. In doing so, they must take into account those persons' technical knowledge, experience, education, and training, as well as the context in which the AI systems are to be used.

The requirement applies to providers and operators of AI systems. A provider is a person or entity that develops an AI system, or has one developed, and places it on the market under its own name or trademark, whether for payment or free of charge. Identifying the provider is often straightforward, but it can become complex, for example when a company rebrands or substantially modifies a third-party AI system.

Article 3(4) of the AI Act defines an "operator" as a person or entity that uses an AI system under its own authority, unless the system is used in the course of a personal, non-professional activity. Given the broad scope of the AI Act, almost every company and authority is likely to qualify as an operator of AI systems sooner or later. They must then ensure that their personnel, and anyone using AI systems on their behalf, have sufficient AI competence.

What is AI competence? AI competence means the ability to critically assess AI technologies and to use them effectively in a wide range of contexts. It includes understanding how AI systems work technically, what measures they apply, and how to interpret their outputs, as well as knowing how AI-supported decisions affect people. Because the specific purpose of use matters, it is difficult to prescribe a standard approach for implementing this requirement in a given company or authority.

The AI competence requirements are dynamic. Only an "adequate level" of competence is required, and providers and operators must ensure it "to the best of their ability." A European AI Board will support the EU Commission in promoting AI competence tools and raising public awareness. The EU Commission and the member states will also work with stakeholders to develop voluntary codes of conduct for promoting AI competence. It is unclear, however, when concrete results such as these codes will be available. Until then, affected companies and authorities may need external support if they cannot build and impart sufficient AI competence on their own.

In the absence of specific guidelines on what AI competence entails, the consulting industry has already begun offering courses that train "AI officers" or "AI managers," and law firms publish their own views on which aspects matter. There is no one-size-fits-all way to impart sufficient AI competence; ultimately, answers and concepts tailored to the specific company or authority are needed. Whatever approach is chosen, efforts to build AI competence should be well documented, for example in a structured training log like the one sketched below.
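Because supervisory authorities may one day ask how competence was established, keeping a structured record of training measures can help. The following is a minimal, illustrative Python sketch; the `TrainingRecord` structure and all its fields are our own assumptions, since the AI Act does not prescribe any documentation format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """One documented AI-competence measure for one employee.

    The fields are illustrative assumptions; the AI Act does not
    mandate a specific documentation format.
    """
    employee: str
    role: str           # e.g. "case handler operating a triage tool"
    ai_system: str      # which AI system the measure relates to
    measure: str        # e.g. "workshop on interpreting model outputs"
    completed_on: date
    evidence: str       # e.g. path to a certificate or attendance list

# Example: a log that could be shown to management or an authority.
training_log: list[TrainingRecord] = [
    TrainingRecord(
        employee="J. Doe",
        role="Case handler",
        ai_system="Document triage assistant",
        measure="Introductory course: capabilities and limits of the tool",
        completed_on=date(2025, 1, 20),
        evidence="certificates/jdoe-2025-01.pdf",
    ),
]
```

A record like this does not by itself prove competence, but it documents that the measures required by Article 4 were actually taken, and when.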

What is the significance of the risk classifications for AI competence? The AI Act sorts AI systems into four categories: prohibited, high-risk, limited-risk, and minimal or no risk. High-risk AI systems are those that pose significant risks to the health, safety, or fundamental rights of EU citizens, but whose socioeconomic benefits are considered substantial enough to justify allowing them under strict conditions.
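To make the four-tier structure concrete, here is a minimal Python sketch of a risk-tier enum such as a compliance team might use when inventorying its AI systems. The tiers mirror the Act's categories; the example systems and their classifications are illustrative assumptions, not legal assessments.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers distinguished by the AI Act."""
    PROHIBITED = "prohibited"      # e.g. social scoring (banned under Art. 5)
    HIGH_RISK = "high-risk"        # strict duties, incl. human oversight
    LIMITED_RISK = "limited-risk"  # mainly transparency duties
    MINIMAL_RISK = "minimal or no risk"

# Hypothetical inventory; each entry would need a real legal assessment.
inventory = {
    "social-scoring prototype": RiskTier.PROHIBITED,
    "CV screening tool": RiskTier.HIGH_RISK,
    "customer-service chatbot": RiskTier.LIMITED_RISK,
    "spam filter": RiskTier.MINIMAL_RISK,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.value}")
```

The point of such an inventory is that the required depth of AI competence follows the tier: staff working with a high-risk system need markedly more training than staff using a minimal-risk tool.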

Operators of high-risk AI systems, and thus their employees or officials, need special AI competencies. They must be able to make informed AI-related decisions and have the authority to shut a system down in an emergency. Article 26(2) of the AI Act spells out the operators' obligations in this respect: operators must assign human oversight to individuals who have the necessary competence, training, and authority, and must give them the required support. Put the other way around, individuals without the necessary AI competence should not operate high-risk AI systems.

High-risk AI systems must be designed so that natural persons can monitor how they function and ensure they are used as intended. Providers must build in measures for human oversight before placing a system on the market, including operational constraints that the system itself cannot override and that respond to the human operator. The legislation thus envisions an interplay between human AI competence on the one side and technical intervention, control, and stop mechanisms on the other when high-risk AI systems are used, as the simplified sketch below illustrates.
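What such a stop mechanism might look like in software can be sketched as a thin wrapper around the AI system: a human overseer can halt processing at any time, and the wrapper refuses to produce further output once stopped. This is a simplified illustration under our own assumptions (the class, method names, and stand-in model are hypothetical), not a design mandated by the AI Act.

```python
import threading

class OverseenAISystem:
    """Illustrative wrapper giving a human overseer a stop mechanism.

    A simplified sketch: the actual Article 14 requirements concern
    the design of the high-risk system as a whole, not a specific API.
    """

    def __init__(self, model):
        self._model = model              # underlying AI system (any callable)
        self._stopped = threading.Event()

    def emergency_stop(self, overseer: str, reason: str) -> None:
        """Called by a person with the authority to halt the system."""
        print(f"STOP triggered by {overseer}: {reason}")
        self._stopped.set()

    def predict(self, inputs):
        """Refuse to operate once the stop mechanism has been triggered."""
        if self._stopped.is_set():
            raise RuntimeError("System halted by human oversight")
        return self._model(inputs)

# Usage with a trivial stand-in model:
system = OverseenAISystem(model=lambda x: f"decision for {x!r}")
print(system.predict("application #42"))
system.emergency_stop(overseer="duty officer", reason="implausible outputs")
# Any further call to system.predict(...) now raises RuntimeError.
```

The design choice worth noting is that the stop state lives outside the model itself, so the model cannot override it; the human decision is final until a person lifts it.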

What are the consequences of violating the AI competence requirement? Violations can have significant consequences, including liability claims and demands for damages. As under other regulatory regimes, executives may be held personally liable if they fail to meet their obligations regarding AI competence or delegate this responsibility inadequately within their organization.

In addition, fines and other sanctions may be imposed under the national penalty catalogs currently being drawn up. It is only a matter of time before it becomes clear at the national level which fines or other penalties companies and authorities will face if they fail to comply with the AI competence requirement.