The AI Act, the EU's Artificial Intelligence Regulation, came into effect on August 1, 2024, but is not yet fully operational. The EU legislator has planned a phased implementation to give companies and authorities that use artificial intelligence, as well as the supervisory authorities responsible for enforcement, enough time to adapt to the complex regulatory requirements. Many welcome this, given the scale of the regulation: 113 articles, 180 recitals, 144 pages of official legal text (German version), and, according to the EU Commission, an estimated 70 implementing and delegated legal acts still to be issued outside the AI Act itself.
The definition of what constitutes an AI system in the new EU AI regulation is extremely broad, as is the term "operator" of an AI system. Consequently, almost all companies and authorities will eventually fall under the AI Act's rules. From February 2, 2025, this means they must have an adequate level of AI competence when using AI.
The AI Act demands special AI competencies from operators of so-called high-risk AI systems: they must be able to make informed decisions about AI and have the authority to shut down a system in an emergency. As with other regulations, the AI Act can also expose managing directors to personal liability if they fail to properly fulfill their obligations under the legal requirements for AI competence.
On February 2, 2025, the first stage of the AI Act takes effect, and its first sections, Chapters I and II, become operational. Chapter I contains general provisions such as the subject matter of the regulation, its scope, definitions, and the rules on AI competence in Article 4. Chapter II consists solely of Article 5, which lists prohibited AI practices.
In practice, the required AI competence means understanding the systems in use and being able to make informed decisions about them; for high-risk AI systems, operators must additionally be able to intervene and shut systems down if necessary. The risk of personal liability makes this a pressing concern for managing directors: failing to meet the AI competence requirements can lead to legal consequences, so the Act's requirements need to be understood and implemented early.
Overall, the AI Act aims to ensure that AI is used responsibly and safely across the EU, protecting consumers and maintaining trust in AI technologies. As the remaining stages of the regulation come into force, companies and authorities will need to prioritize AI competence and compliance to avoid liability and to manage their AI systems effectively.