📢 The EU's New AI Guidelines: Understanding the Ban on Prohibited AI Practices
- PCV LLC
- Feb 4
- 2 min read

On 4 February 2025, the European Commission issued official guidelines clarifying the implementation of prohibited AI practices under the EU AI Act (Regulation (EU) 2024/1689). These guidelines provide legal clarity on AI systems that pose unacceptable risks to fundamental rights and EU values. With the prohibitions applicable since 2 February 2025, businesses, AI developers, and regulators must ensure compliance to avoid severe penalties.
This blog explores the key takeaways from the new guidelines and what they mean for AI providers and users in the EU.
What Are Prohibited AI Practices?
The EU AI Act classifies AI systems based on risk, and Article 5 explicitly bans certain AI practices that are deemed too dangerous for fundamental rights. The latest guidelines confirm the following AI applications are prohibited:
- Subliminal Manipulation & Deception - AI systems that use subliminal techniques to manipulate individuals beyond their awareness, potentially causing harm
- Exploitation of Vulnerable Groups - AI that takes advantage of people based on age, disability, or socio-economic status to distort behaviour or decision-making
- Social Scoring - AI systems used by public or private entities to classify individuals based on social behaviour, personal characteristics, or economic status, leading to unfair or disproportionate consequences
- Predictive Crime Profiling - AI systems that assess or predict the likelihood of individuals committing crimes based solely on profiling, personality traits, or past behaviour
- Mass Facial Recognition & Biometric Data Scraping - AI models created using untargeted scraping of facial images from the internet or CCTV footage, infringing on privacy rights
- Emotion Recognition in Workplaces & Schools - AI tools used to infer the emotions of employees or students, unless explicitly justified for medical or safety purposes
- Biometric Categorisation of Sensitive Data - AI that classifies people based on race, political views, trade union membership, religion, or sexual orientation
- Real-Time Remote Biometric Identification (RBI) in Public Spaces - Live facial recognition technology used in publicly accessible areas by law enforcement, except in strictly defined cases such as terrorism prevention or locating missing persons

Who Needs to Comply?
- AI Providers: Developers, manufacturers, and distributors of AI systems operating in or targeting the EU.
- AI Deployers: Businesses, law enforcement, and public institutions using AI solutions.
- Tech Companies: Particularly those offering biometric and surveillance-related AI products.
Non-compliance can lead to fines of up to €35 million or 7% of annual worldwide turnover, whichever is higher, making strict adherence a necessity for AI stakeholders.
Conclusion & Compliance Imperatives
The latest EU guidelines on prohibited AI practices represent a pivotal development in the regulatory landscape, underscoring the EU's steadfast commitment to ethical AI governance and the protection of fundamental rights. These guidelines establish clear boundaries for AI applications, ensuring that innovation aligns with societal values and legal frameworks.
For AI providers, businesses, and regulators, proactive compliance is imperative. Organisations must conduct comprehensive AI audits, align their practices with regulatory requirements, and implement risk mitigation strategies to prevent legal repercussions. The evolving nature of AI laws necessitates continuous monitoring and adaptation to maintain compliance and uphold trust in AI-driven solutions.
If you need expert guidance on aligning your AI systems with the EU AI Act, contact us at info@pelaghiaslaw.com.