
The European regulation on artificial intelligence is no longer a distant prospect. The AI Act has been in effect since August 1, 2024, and the countdown to the most stringent requirements, those concerning high-risk AI systems, ends on August 2, 2026. There are fewer than five months left for companies that have not yet addressed this issue.
Yet many organizations continue to view this legislation from a distance, convinced that it does not directly affect them. This is a mistake. The AI Act does not merely regulate technologies; it regulates actors, responsibilities, and roles within a value chain. And within this chain, virtually all companies that use, purchase, deploy, or distribute AI are affected.
Let’s break down what this regulation actually means in practice, beyond the simplified explanations.
The AI Act represents a world first: a comprehensive, structured regulatory framework that applies directly to all AI systems and stakeholders within the European Union. Its ambition goes far beyond technical compliance. The Act seeks to establish the conditions for a more robust, responsible, and sustainable development of AI.
But reducing the AI Act to a checklist of requirements would be a strategic mistake. It challenges us to rethink AI governance within the organization—a governance framework that, when well-designed, becomes a driver of performance and differentiation, not a hindrance.
The level of risk does not depend solely on the technology. It depends above all on how it is used.
The core of the framework is based on a classification of AI systems according to their potential risk to people’s safety, health, and fundamental rights.

Unacceptable risk: these systems are simply prohibited, because they undermine the fundamental values of the European Union.
Examples include: social scoring of individuals by public authorities, AI that manipulates behavior through subliminal techniques, emotion recognition in the workplace and in schools, and untargeted scraping of facial images to build recognition databases.
High risk: these systems are not inherently dangerous, but their potential impact on individuals subjects them to stringent requirements. This is where most companies will need to act quickly.
Examples include: recruitment and candidate-screening tools, credit scoring, AI used in critical infrastructure, in education (exam scoring, admissions), in access to essential services, and in law enforcement.
Limited risk: these systems are free to use, but they must clearly inform users that they are interacting with an AI.
Examples include: chatbots, virtual assistants, and AI-generated or AI-manipulated content (such as deepfakes), which must be labeled as such.
Minimal risk: these systems are not subject to any specific requirements, but the voluntary adoption of best practices is strongly recommended.
Examples: spam filters, content recommendation systems, internal optimization AI that has no direct impact on individuals.
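As a purely illustrative sketch, the four-tier logic can be pictured as a lookup from intended use to obligation level. The categories and use-case names below are simplified stand-ins; the actual classification comes from Article 5 and Annex III of the AI Act, not from any table like this one.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific requirements"

# Hypothetical mapping from intended use to tier (illustrative only;
# the legal classification depends on the Act's own criteria).
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; default to minimal."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("recruitment_screening"))  # RiskTier.HIGH
```

Note that the deciding input is the use case, not the underlying technology: the same model could land in different tiers depending on what it is deployed to do, which is exactly the point made above.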
This is one of the most commonly misunderstood points, yet it is essential for determining your actual obligations.
An AI system is an operational application designed for a specific purpose: an automated recruitment tool, a scoring algorithm, or a diagnostic system. It is at the heart of the regulatory framework.
A general-purpose AI (GPAI) model, such as GPT or image-generation models, is a core technology that can be reused in multiple contexts. It is not limited to a single use. The requirements that have applied to such models since August 2025 primarily concern transparency, documentation, and compliance with copyright laws regarding training data.
What this means for you: If your company uses a GPAI model to build an internal or commercial AI system, you are subject to both the obligations related to the model AND those related to the system. The two sets of requirements apply cumulatively.
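A minimal way to picture this cumulative logic: the obligations attached to the model and those attached to the system combine as a union, with shared items counted once. The obligation labels below are hypothetical shorthand, not the Act's wording.

```python
# Hypothetical, simplified obligation sets; the real obligations are
# defined in the AI Act itself, not by these labels.
GPAI_MODEL_OBLIGATIONS = {
    "technical_documentation",
    "training_data_transparency",
    "copyright_policy",
}
HIGH_RISK_SYSTEM_OBLIGATIONS = {
    "risk_management_system",
    "human_oversight",
    "technical_documentation",
    "post_market_monitoring",
}

# Building a system on top of a GPAI model: the two sets apply
# cumulatively, i.e. their union.
total = GPAI_MODEL_OBLIGATIONS | HIGH_RISK_SYSTEM_OBLIGATIONS
print(len(total))  # 6 distinct obligations (documentation counted once)
```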
The AI Act does not treat all stakeholders the same. It identifies five roles, each carrying a different level of responsibility: provider, deployer, importer, distributor, and authorized representative.
One point that is often overlooked: a single company can fulfill multiple roles simultaneously, depending on the systems it uses or sells. A B2B SaaS company that integrates AI into its product acts both as a provider to its customers and as a deployer for its internal use. This dual role creates cumulative obligations.
The implementation of the AI Act is gradual, but the pace is picking up:
- February 2, 2025: prohibitions on unacceptable-risk practices and AI literacy obligations.
- August 2, 2025: obligations for general-purpose AI (GPAI) models and governance rules.
- August 2, 2026: the bulk of the requirements, including those for high-risk systems.
- August 2, 2027: extended deadline for high-risk systems embedded in regulated products.
The penalties for non-compliance are significant: up to €35 million or 7% of global annual revenue, whichever is higher, for the most serious violations.
It would be simplistic to view the AI Act solely as a regulatory burden. Organizations that prepare for this transformation reap tangible benefits: stronger trust from customers and partners, better-controlled legal risk, and the performance and differentiation gains that come with well-designed governance.
ISO/IEC 42001, the first international standard for AI management systems, serves as a key enabler in this regard. It allows organizations to build this governance framework on a proven structure, drawing on established risk-management practices.
The AI Act is neither a theoretical document nor a distant concern. It is in effect, its implementation is accelerating, and the window of opportunity to prepare is narrowing. The right question is not “Are we affected?” Virtually all companies that use AI are affected. The right question is: “What is the actual use of our AI systems, and what impacts might they have?”
It all starts with this answer.
👉 Reserve your spot for the next webinar: How to Secure Your AI Applications from A to Z