As of February 2, 2025, the EU AI Act requires providers and deployers of AI systems, as well as companies whose employees use AI applications such as ChatGPT, to ensure their workforce has the skills, knowledge, and understanding to use AI systems competently.
While this requirement might seem like an additional bureaucratic hurdle, it offers a chance to drive digital transformation within organizations and gain strategic advantages. The challenge lies in quickly implementing these changes amid the fast-paced developments in AI.
Under the regulation, employees should be aware of the opportunities, risks, and potential harms that AI applications can pose. This applies not only to high-risk AI applications but to any use of AI technology, from customer service to data analysis and tools such as ChatGPT.
The new rules bring more than just obligations; they open up real opportunities. Companies can use the requirements to prepare their teams for the future and help them master AI technologies. That strengthens not only employees' skills but also the company's competitive position.
Instead of viewing the regulation as a burden, companies could see it as a chance to finally tackle digital transformation. Well-trained teams work more efficiently, develop more creative solutions, and can even open up new lines of business. In short, the obligation could serve as a springboard for sustainable success.
But how can companies build AI competence? Here are three approaches to help foster AI skills among employees:
- Clear rules and standards: Unified guidelines and ethical principles provide orientation and security in dealing with AI. This ensures that all employees have the same foundation—a crucial step for responsible technology use.
- Regular and practical learning: Training formats are most effective when they occur continuously. Practical learning with real examples helps employees apply content directly and internalize it sustainably. Formats should also be adapted to participants’ knowledge levels to build skills effectively.
- Learning in teams: When employees from different fields like IT, law, and ethics collaborate, a comprehensive understanding of AI emerges. This interdisciplinary exchange fosters innovation and ensures technology is used sensibly and safely.
With regular, practical training and cross-functional collaboration, companies can not only meet the legal requirements but also exploit AI's full potential over the long term.
The EU regulation leaves no room for non-compliance: companies must fulfill the training obligation. Although there is no national implementing law or designated supervisory authority yet, failing to provide the necessary training could be deemed a breach of the duty of care and lead to liability claims.
The EU traditionally uses regulation to drive change, and AI development is no exception. While many of the new regulation's requirements may slow European companies in global competition, the training obligation offers a valuable upside: it ensures employees build AI skills early and learn to recognize and use the technology's full potential.
However, regulation alone isn’t enough to keep up with the US and China. Without targeted investments in research, development, and infrastructure, Europe will lag behind. The obligation for AI training should be part of a broader strategy that not only prepares companies for legal requirements but also enhances their future viability. February 2, 2025, marks the start of a new phase of digital transformation and an opportunity to gain an early advantage.