ISO 42001 and AI Governance

The AI Act Explained: Risk Classification, Roles, and Obligations

The European regulation on artificial intelligence is no longer a distant prospect. The AI Act has been in effect since August 1, 2024. And the countdown to the most stringent requirements—those concerning high-risk AI systems—ends on August 2, 2026. There are less than five months left for companies that have not yet addressed this issue.

Yet many organizations continue to view this legislation from a distance, convinced that it does not directly affect them. This is a mistake. The AI Act does not merely regulate technologies; it regulates actors, responsibilities, and roles within a value chain. And within this chain, virtually all companies that use, purchase, deploy, or distribute AI are affected.

Let’s break down what this regulation actually means in practice, beyond the simplified explanations.

A historic text, but one that is often misunderstood

The AI Act represents a world first: a comprehensive, structured regulatory framework that applies directly to all AI systems and stakeholders within the European Union. Its ambition goes far beyond technical compliance. The Act seeks to establish the conditions for a more robust, responsible, and sustainable development of AI.

But reducing the AI Act to a checklist of requirements would be a strategic mistake. It challenges us to rethink AI governance within the organization—a governance framework that, when well-designed, becomes a driver of performance and differentiation, not a hindrance.

The level of risk does not depend solely on the technology. It depends above all on how it is used.

The 4 risk levels: an impact-based approach

The core of the framework is based on a classification of AI systems according to their potential risk to people’s safety, health, and fundamental rights.

[Figure: AI Act risk-classification decision tree]

🔴 Unacceptable risk — Total ban (effective February 2025)

These systems are simply prohibited. They undermine the fundamental values of the European Union.

Examples include:

  • Social scoring of individuals (by public or private actors)
  • Exploitation of people’s vulnerabilities (age, disability, economic situation)
  • Behavioral or cognitive manipulation without the individuals' knowledge
  • Emotion recognition in professional or educational settings

🟠 High risk — Strict regulations (full implementation: August 2, 2026)

These systems are not inherently dangerous, but their potential impact on individuals subjects them to strict requirements. This is where most companies will need to act quickly.

Examples include:

  • Recruitment and automated candidate-assessment tools
  • Credit scoring and algorithms governing access to financial services
  • Medical diagnostic support systems
  • Systems used in the justice system, law enforcement, or critical infrastructure
  • Student assessment tools in education

🟡 Specific risk related to transparency — Disclosure obligations (effective August 2, 2026)

These systems remain free to use, but users must be clearly informed that they are interacting with an AI or viewing AI-generated content.

Examples include:

  • Chatbots and virtual assistants
  • Deepfakes and synthetic content
  • Any AI-generated content intended for the public

🟢 Minimal risk — No specific obligations

These systems are not subject to any specific requirements, but the voluntary adoption of best practices is strongly recommended.

Examples: spam filters, content recommendation systems, internal optimization AI that has no direct impact on individuals.
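To make the classification concrete, here is a minimal sketch of how an internal AI-inventory tool might triage systems across these four levels. Everything in it is an illustrative assumption: the keyword buckets, the `classify_risk` helper, and the example use cases mirror the lists above but are not a legal test. Actual classification requires a case-by-case analysis against Article 5 and Annex III of the Act.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "prohibited"          # banned since February 2, 2025
    HIGH = "strict requirements"         # fully applicable August 2, 2026
    TRANSPARENCY = "disclosure duties"   # users must know AI is involved
    MINIMAL = "no specific obligations"

# Illustrative keyword buckets mirroring the examples in this article.
PROHIBITED_USES = {"social scoring", "emotion recognition at work",
                   "subliminal manipulation"}
HIGH_RISK_USES = {"recruitment", "credit scoring", "medical diagnosis",
                  "law enforcement", "student assessment"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "synthetic content"}

def classify_risk(use_case: str) -> RiskLevel:
    """Rough first-pass triage of an AI system by its declared use case."""
    use = use_case.lower()
    if any(term in use for term in PROHIBITED_USES):
        return RiskLevel.UNACCEPTABLE
    if any(term in use for term in HIGH_RISK_USES):
        return RiskLevel.HIGH
    if any(term in use for term in TRANSPARENCY_USES):
        return RiskLevel.TRANSPARENCY
    return RiskLevel.MINIMAL  # default tier: voluntary best practices

print(classify_risk("chatbot for customer support"))  # RiskLevel.TRANSPARENCY
print(classify_risk("recruitment screening model"))   # RiskLevel.HIGH
```

The point of such a sketch is the order of the checks: prohibitions first, then high-risk categories, then transparency cases, with minimal risk only as the default.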

The key distinction: AI systems vs. general-purpose AI (GPAI)

This is one of the most commonly misunderstood points, yet it is essential for determining your actual obligations.

An AI system is an operational application designed for a specific purpose: an automated recruitment tool, a scoring algorithm, or a diagnostic system. It is at the heart of the regulatory framework.

A general-purpose AI (GPAI) model, such as GPT or image-generation models, is a core technology that can be reused in multiple contexts. It is not limited to a single use. The requirements that have applied to such models since August 2025 primarily concern transparency, documentation, and compliance with copyright laws regarding training data.

What this means for you: If your company uses a GPAI model to build an internal or commercial AI system, you are subject to both the obligations related to the model AND those related to the system. The two sets of requirements apply cumulatively.

Your position in the value chain determines your obligations

The AI Act does not treat all stakeholders the same. It identifies five roles, each carrying a different level of responsibility.

Role | Definition | Level of obligation
Provider | Develops an AI system or places it on the market | Highest
Deployer | Uses an AI system in its operations | High (for high-risk systems)
Distributor | Makes a system available on the market without being its developer | Moderate
Authorized representative | Represents a non-EU provider in the EU | Moderate
Importer | Places an AI system from a third country on the European market | Moderate

One point that is often overlooked: a single company can fulfill multiple roles simultaneously, depending on the systems it uses or sells. A B2B SaaS company that integrates AI into its product acts both as a provider to its customers and as a deployer for its internal use. This dual role creates cumulative obligations.
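To illustrate how cumulative roles play out, the sketch below models a hypothetical compliance inventory: each role maps to a simplified, non-exhaustive set of obligations, and a company holding several roles inherits the union of those sets. The role names follow the Act; the obligation labels are placeholders for illustration only.

```python
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    DISTRIBUTOR = "distributor"
    AUTHORIZED_REP = "authorized representative"
    IMPORTER = "importer"

# Simplified, illustrative obligation sets per role (not exhaustive).
OBLIGATIONS = {
    Role.PROVIDER: {"risk management", "technical documentation",
                    "conformity assessment", "post-market monitoring"},
    Role.DEPLOYER: {"use per instructions", "human oversight",
                    "incident reporting"},
    Role.DISTRIBUTOR: {"verify CE marking", "storage and transport care"},
    Role.AUTHORIZED_REP: {"hold documentation", "cooperate with authorities"},
    Role.IMPORTER: {"verify conformity", "keep documentation available"},
}

def cumulative_obligations(roles: set[Role]) -> set[str]:
    """Obligations accumulate: a company owes the union over all its roles."""
    return set().union(*(OBLIGATIONS[r] for r in roles))

# The B2B SaaS case from above: provider toward its customers AND
# deployer for its own internal use of the same AI system.
saas_roles = {Role.PROVIDER, Role.DEPLOYER}
print(sorted(cumulative_obligations(saas_roles)))
```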

The timeline you can no longer ignore

The implementation of the AI Act is gradual, but the pace is picking up:

  • August 1, 2024 — Official effective date
  • February 2, 2025 — Ban on AI systems posing an unacceptable risk: in effect
  • August 2, 2025 — Obligations for general-purpose AI (GPAI) models: in effect
  • 🔴 August 2, 2026 — Full application, including high-risk systems (Annex III) and transparency obligations: in less than 5 months
  • August 2, 2027 — Expansion to products incorporating high-risk AI (medical devices, vehicles, etc.)

The penalties for non-compliance are significant: up to €35 million or 7% of global annual revenue, whichever is higher, for the most serious violations.

Why AI governance is an advantage, not a constraint

It would be simplistic to view the AI Act solely as a regulatory burden. Organizations that prepare for this transformation reap tangible benefits:

  • Defined responsibilities: who decides, who approves, and who is accountable for the use of AI
  • Risk prioritization: not all AI systems are created equal, and governance focuses effort where the impact is greatest
  • Stakeholder trust: customers, partners, investors, and regulators increasingly favor organizations that demonstrate mastery of their AI
  • Access to new markets: public tenders and demanding B2B contracts increasingly include AI compliance criteria
  • Sustainable regulatory alignment: robust governance lets organizations adapt to future regulatory changes without starting from scratch

ISO 42001, the first international standard for AI management systems, is a key enabler here: it gives organizations a proven structure for building this governance framework, drawing on established risk-management practices.

Key takeaways

The AI Act is neither a theoretical document nor a distant concern. It is in effect, its implementation is accelerating, and the window of opportunity to prepare is narrowing. The right question is not “Are we affected?” Virtually all companies that use AI are affected. The right question is: “What is the actual use of our AI systems, and what impacts might they have?”

It all starts with this answer.

👉 Reserve your spot for the next webinar: How to Secure Your AI Applications from A to Z
