
📢 NAVIGATING THE NEW EU GUIDELINES ON THE DEFINITION OF AN AI SYSTEM: KEY TAKEAWAYS FOR BUSINESSES AND LEGAL PROFESSIONALS

  • Writer: PCV LLC
  • Feb 6
  • 3 min read

Updated: Feb 12

On the 6th of February 2025, the European Commission issued its long-awaited Guidelines on the Definition of an Artificial Intelligence (AI) System, clarifying the scope and application of Regulation (EU) 2024/1689 (the "EU AI Act"). These guidelines are a crucial step toward ensuring consistency in the interpretation and enforcement of AI regulations across the European Union.


Below, we provide an overview of the key aspects of these guidelines and what they mean for businesses, developers, and legal professionals.


The Importance of AI System Definition


The EU AI Act, which entered into force on the 1st of August 2024, establishes a risk-based regulatory framework for AI systems. However, not all software systems fall under its scope. The Commission's new guidelines focus on defining what constitutes an AI system under Article 3(1) of the EU AI Act. This definition plays a pivotal role in determining compliance obligations, especially regarding high-risk AI applications and prohibited practices.


The Seven Core Elements of an AI System


According to the guidelines, an AI system is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".


This definition comprises seven key elements:

  • Machine-Based System – AI systems rely on computational processes and integrate both hardware and software components

  • Varying Levels of Autonomy – These systems operate with some degree of independence from human involvement

  • Adaptiveness – While not a mandatory feature, some AI systems exhibit self-learning capabilities post-deployment

  • Objective-Driven – AI systems function based on explicit or implicit objectives, distinguishing them from traditional software

  • Inferencing Capabilities – AI systems use data inputs to generate outputs, such as predictions, recommendations or decisions

  • Influence on Environments – AI outputs impact physical or digital spaces, differentiating them from passive software tools

  • Context-Specific Impact – The AI system’s ability to interact with its surroundings determines its classification and potential risks


Distinguishing AI from Traditional Software


One of the most significant contributions of the guidelines is their clarification that not all automated systems qualify as AI systems. Traditional software based on predefined, rule-based logic without inferencing capabilities falls outside the scope of the EU AI Act. 


Examples of excluded systems include:

  • Basic data processing software (e.g. database management tools)

  • Simple statistical or mathematical optimisation models

  • Classical heuristics-based systems (e.g. traditional chess programs)

  • Static estimation systems with no adaptability (e.g. basic demand forecasting models)
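
To make the distinction more concrete, the simplified sketch below contrasts a system driven entirely by predefined, human-authored rules with one whose behaviour is derived from data. It is purely illustrative and not taken from the Commission's guidelines: the function names and the spam-filtering scenario are hypothetical, and whether any real system meets the Article 3(1) definition depends on the full legal analysis, not on this simplification.

```python
# Purely illustrative sketch (not from the Commission's guidelines): hypothetical
# spam-filtering examples contrasting fixed, human-authored rules with a component
# whose behaviour is derived from data. Classification under the EU AI Act always
# requires the full legal test, not this simplification.

BLOCKED_WORDS = {"lottery", "prize", "wire transfer"}

def rule_based_filter(message: str) -> bool:
    """Predefined rule written entirely by a human: no inference from data."""
    return any(word in message.lower() for word in BLOCKED_WORDS)

def fit_keyword_weights(labelled_messages: list[tuple[str, bool]]) -> dict[str, float]:
    """Derives per-word scores from labelled examples, i.e. the component 'infers'
    how to generate its output rather than following a fixed rule."""
    weights: dict[str, float] = {}
    for text, is_spam in labelled_messages:
        for word in set(text.lower().split()):
            weights[word] = weights.get(word, 0.0) + (1.0 if is_spam else -1.0)
    return weights

def learned_filter(message: str, weights: dict[str, float]) -> bool:
    """Scores a message with the data-derived weights and applies a threshold."""
    score = sum(weights.get(word, 0.0) for word in message.lower().split())
    return score > 0

if __name__ == "__main__":
    training = [("win a free prize now", True), ("meeting agenda attached", False)]
    weights = fit_keyword_weights(training)
    print(rule_based_filter("Claim your prize"))          # True: matches a hard-coded keyword
    print(learned_filter("win a free holiday", weights))  # True: score derived from training data
```

The first function simply executes rules defined in advance by its developer; the second derives its parameters from data it receives, which is closer to the notion of "inference" the guidelines use to separate AI systems from traditional software.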


Regulatory Implications for Businesses


Businesses deploying AI-powered solutions should assess whether their systems meet the EU AI Act's definition of an AI system. If classified as such, they may be subject to additional regulatory obligations, especially if their AI applications fall into high-risk categories (e.g. biometric identification, recruitment tools, healthcare diagnostics).


Key considerations include:

  • Compliance with AI risk classifications under the EU AI Act

  • Transparency obligations for certain AI applications (e.g. chatbots, generative AI systems)

  • Prohibited AI practices, such as subliminal manipulation and social scoring

  • Human oversight and accountability measures


Next Steps for Legal Professionals and AI Developers


The Commission’s guidelines underscore the need for legal and technical due diligence in AI deployment.


Businesses should:

  • Conduct AI system audits to determine compliance obligations

  • Develop internal governance frameworks to mitigate AI risks

  • Stay updated on future regulatory clarifications and enforcement actions


Conclusion


The Commission's guidelines provide much-needed clarity on the definition of AI systems under the EU AI Act, setting the stage for consistent regulatory enforcement. 


As businesses and legal professionals navigate this evolving framework, proactive compliance will be key to mitigating risks and harnessing AI’s transformative potential within the bounds of EU law.


If your business is developing or deploying AI-driven solutions, our firm can provide tailored legal guidance on regulatory compliance, risk assessment, and AI governance strategies.


Contact us at info@pelaghiaslaw.com to ensure your AI systems align with the latest EU standards.

