The AI Act is designed as a horizontal legal framework that complements sectoral regulations and other digital laws, yet it is poorly aligned with those existing rules. That is the key finding of a study by law professor Philipp Hacker of Frankfurt (Oder), commissioned by the Bertelsmann Foundation. Many AI applications that fall under the AI Act's comprehensive requirements are already subject to other regulations.
Hacker highlights the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA) as examples. The AI Act also stands in "tension" with sector-specific rules in finance, medicine, and the automotive industry, for instance around AI-based credit scoring, diagnostic systems, and autonomous driving features.
The AI Act takes a broad, risk-based approach: it categorizes AI applications by their risk potential and imposes strict requirements on potentially dangerous systems. It applies to companies both inside and outside the EU whenever their AI systems are offered or used within the EU. Businesses and member states must implement the Act in stages, with most provisions taking effect by August 2026. However, parts of the framework do not yet fit together, leading to inconsistencies, overlaps, and gaps that could hinder smooth implementation and create legal uncertainty.
Hacker suggests that implementing regulations and guidelines could help resolve these issues. He sees potential conflicts between the DSA and the AI Act's risk-analysis obligations, particularly for platforms that integrate generative AI technologies such as large language models. Clear rules on reusing personal data for AI training are also lacking, which complicates compliance with both the GDPR and the AI Act.
Privacy International believes that models like GPT, Gemini, or Claude have been trained on personal information without a sufficient legal basis, and that data subject rights under the GDPR are not being upheld. In finance, diverging data protection requirements could complicate AI-driven risk analyses. In the automotive industry, integrating driver assistance systems into existing product safety and liability regimes poses a dual regulatory challenge. In healthcare, conflicting regulations could slow the adoption of AI-based medical applications, such as cancer detection tools or systems for drafting medical reports.
In the short term, the author recommends better alignment of existing regulations to avoid redundancies and increase efficiency. The AI Act itself already takes some steps in this direction, for example by allowing financial institutions to fulfill quality management obligations through their existing internal governance systems. The EU Commission could encourage similar coordination through implementing regulations. National supervisory authorities should also issue guidelines on applying the AI Act in specific sectoral contexts.
In the long term, national and European initiatives are needed to harmonize AI regulation with other legal frameworks and to resolve contradictions for good. Regulatory frameworks should be reviewed regularly so that technological and societal developments are adequately taken into account.