📢 EU AI ACT IN MOTION: KEY DEVELOPMENTS ACROSS THE EU
- PCV LLC
- Mar 31

As the implementation of the EU AI Act enters a critical phase, regulatory discussions and policy directions are gaining momentum. This past week brought significant updates from both EU institutions and Member States, shedding light on the emerging legal and compliance landscape for artificial intelligence across Europe.
AI Board Convenes: Progress in Coordination and Implementation
On 24 March 2025, the AI Board held its third official meeting, chaired by Poland’s Secretary of State for Digital Affairs, Mr. Dariusz Standerski. Representatives from EU Member States gathered to exchange national strategies on AI Act implementation and to align on joint communication efforts. Executive Vice-President Henna Virkkunen presented the Commission’s evolving policy priorities, while the AI Office provided technical briefings on AI system definitions, prohibition guidelines, and the third draft of the Code of Practice for general-purpose AI. These deliverables are expected to shape the operational framework for compliance across the EU. Notably, a dedicated call for the scientific panel was also announced, underscoring the EU’s emphasis on evidence-based oversight.
Germany's AI Debate: A Coalition Divided
Meanwhile, in Germany, internal coalition disagreements signal divergent regulatory ambitions. Leaked negotiation documents reveal that the Christian Democrats (CDU/CSU) are advocating for a revision of the AI Act to reduce perceived burdens on businesses, while also pushing for broader digital sovereignty through future data legislation. In contrast, the Social Democratic Party (SPD) stands firm on advancing an AI Liability Directive at the EU level. Despite differing views, both parties support regulatory initiatives to fast-track data centre development, a cornerstone of Europe’s digital infrastructure strategy.
Lawmakers Push Back on Deregulatory Pressure
At the EU level, concerns have surfaced over attempts to weaken key provisions of the AI Act. As reported by the Financial Times, prominent MEPs involved in the Act’s negotiation have written to Commissioner Virkkunen, cautioning against turning vital parts of the law into voluntary commitments — a move perceived to be driven by lobbying from major US tech firms, including OpenAI and Google. Such a shift would risk undermining the Act’s core purpose, particularly in mitigating risks of election interference, disinformation, and societal manipulation. The MEPs argue that this dilution would create legal uncertainty and compromise democratic safeguards.
Human Rights at Risk in the Code of Practice
In a detailed public statement, experts from the Centre for Democracy and Technology, Wadhwani AI Center, and UC Berkeley criticised the latest draft of the Code of Practice for failing to uphold human rights standards. Risk mitigation requirements for developers of general-purpose AI systems have been dramatically narrowed, rendering protections around privacy, discrimination, public health, and democratic integrity merely optional. The authors argue that this contradicts both the AI Act’s intent and international benchmarks, including the Hiroshima Code of Conduct.
Hungary’s Facial Recognition Plan Raises Alarm
Another key development concerns Hungary’s plan to use facial recognition to identify participants in Pride events. Legal experts, including Dr. Laura Caroli, a key negotiator of the AI Act, have warned that such use would violate Article 5 of the AI Act, which restricts real-time biometric surveillance in publicly accessible spaces. Even when framed as a matter of national security, Hungary’s proposed measures would likely be incompatible with EU law and could trigger infringement proceedings.
Copyright Gaps in the AI Code of Practice
Paul Keller of Open Future flagged concerns over the copyright chapter of the draft Code of Practice. While some progress has been made for open-source AI developers, the latest version narrows compliance obligations to web-scraped data only, excluding many other methods of acquiring training data. The lack of clear performance indicators and transparency requirements further complicates accountability for AI developers and model trainers.
As these developments illustrate, the road to full AI regulation in the EU is complex and highly dynamic. With Member States aligning on implementation strategies, national politics influencing legal direction, and EU institutions under pressure from both industry and civil society, the coming months will be critical in shaping a balanced, enforceable, and rights-based AI framework.
For organisations operating in or entering the AI space, now is the time to prepare for compliance readiness, assess internal risk mitigation strategies, and follow the AI Act’s evolving guidance.
For tailored legal advice on AI Regulatory Compliance, contact our technology and AI department at info@pelaghiaslaw.com.