
The European regulation on artificial intelligence is no longer a distant prospect. The AI Act has been in effect since August 1, 2024. The initial timeline called for the most stringent requirements (those concerning high-risk AI systems) to take effect on August 2, 2026. However, on March 26, 2026, as part of the Digital Omnibus, the European Parliament voted to postpone this date to December 2, 2027. This vote is not yet final: the Council of the EU has yet to take a position, and trilogues will begin in the spring of 2026. August 2026 remains the legal date until the revised text is formally adopted.
Yet many organizations continue to view this legislation from a distance, convinced that it does not directly affect them. This is a mistake. The AI Act does not merely regulate technologies; it regulates actors, responsibilities, and roles within a value chain. And within this chain, virtually all companies that use, purchase, deploy, or distribute AI are affected.
Let’s break down what this regulation actually means in practice, beyond the simplified explanations.
The AI Act represents a world first: a comprehensive, structured regulatory framework that applies directly to all AI systems and stakeholders within the European Union. Its ambition goes far beyond technical compliance. The Act seeks to establish the conditions for a more robust, responsible, and sustainable development of AI.
But reducing the AI Act to a checklist of requirements would be a strategic mistake. It challenges us to rethink AI governance within the organization—a governance framework that, when well-designed, becomes a driver of performance and differentiation, not a hindrance.
The level of risk does not depend solely on the technology. It depends above all on how it is used.
The core of the framework is based on a classification of AI systems according to their potential risk to people’s safety, health, and fundamental rights.

Unacceptable-risk systems are simply prohibited: they undermine the fundamental values of the European Union.
Examples include: social scoring of individuals, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions), and AI designed to manipulate people or exploit their vulnerabilities.
High-risk systems are not inherently dangerous, but their potential impact on individuals subjects them to significant requirements. This is where most companies will need to act quickly.
Examples include: automated recruitment and candidate-screening tools, credit scoring, AI used in education or critical infrastructure, and AI embedded in regulated products such as medical devices.
⚠️ On March 26, 2026, via the Digital Omnibus, the European Parliament proposed postponing this date to December 2, 2027 for the high-risk systems listed in Annex III. This amendment has not yet been definitively adopted: the Council of the EU must still give its approval before the trilogues begin.
Limited-risk systems may be used freely, but users must be clearly informed that they are interacting with an AI.
Examples include: chatbots, AI-generated or manipulated content (deepfakes), and emotion-recognition systems.
⚠️ The European Parliament proposes to postpone the digital labeling requirements for AI-generated content until November 2, 2026 (Article 50(2)). The other transparency requirements under Article 50 remain in effect starting in August 2026.
Minimal-risk systems are not subject to any specific requirements, but the voluntary adoption of best practices is strongly recommended.
Examples: spam filters, content recommendation systems, internal optimization AI that has no direct impact on individuals.
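The four tiers above can be summarized as a simple lookup. The sketch below is purely illustrative (the function name, the mapping, and the use-case labels are our own shorthand, not an official classification tool): real classification always requires a legal analysis of the concrete use case.

```python
# Illustrative sketch of the AI Act's four-tier risk taxonomy.
# NOTE: this is a simplified mapping based on the examples in this
# article; it is NOT a substitute for assessing a system against
# Article 5, Annex III, and Article 50 of the regulation.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "strict requirements before and after deployment",
    "limited": "transparency obligations (users must know it is AI)",
    "minimal": "no specific requirements; best practices recommended",
}

EXAMPLE_USE_CASES = {
    "social scoring of citizens": "unacceptable",
    "automated recruitment screening": "high",
    "customer-facing chatbot": "limited",
    "spam filter": "minimal",
}

def obligations_for(use_case: str) -> str:
    """Return the (simplified) obligation level for a known use case."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return "unknown: assess against Article 5, Annex III, and Article 50"
    return f"{tier}: {RISK_TIERS[tier]}"
```

The key point the sketch makes concrete: the same underlying technology can land in different tiers depending on its use, so the lookup key is the use case, never the model.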
This is one of the most commonly misunderstood points, yet it is essential for determining your actual obligations.
An AI system is an operational application designed for a specific purpose: an automated recruitment tool, a scoring algorithm, or a diagnostic system. It is at the heart of the regulatory framework.
A general-purpose AI (GPAI) model, such as GPT or image-generation models, is a core technology that can be reused in multiple contexts. It is not limited to a single use. The requirements that have applied to such models since August 2025 primarily concern transparency, documentation, and compliance with copyright laws regarding training data.
What this means for you: If your company uses a GPAI model to build an internal or commercial AI system, you are subject to both the obligations related to the model AND those related to the system. The two sets of requirements apply cumulatively.
The AI Act does not treat all stakeholders the same. It identifies five roles (provider, deployer, importer, distributor, and authorized representative), each carrying a different level of responsibility.
One point that is often overlooked: a single company can fulfill multiple roles simultaneously, depending on the systems it uses or sells. A B2B SaaS company that integrates AI into its product acts both as a provider to its customers and as a deployer for its internal use. This dual role creates cumulative obligations.
The implementation of the AI Act is gradual, but the pace is picking up: the prohibitions and AI-literacy obligations have applied since February 2, 2025; the rules for general-purpose AI models since August 2, 2025; most remaining obligations, including those for the high-risk systems listed in Annex III, are scheduled for August 2, 2026; and high-risk systems embedded in regulated products follow on August 2, 2027.
The penalties for non-compliance are significant: up to €35 million or 7% of global annual revenue, whichever is higher, for the most serious violations.
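As a quick worked example, the ceiling for the most serious violations is the greater of the two amounts. The helper below is a minimal illustration (the function name and the revenue figure are ours):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Ceiling for the most serious AI Act violations:
    up to EUR 35 million or 7% of global annual revenue,
    whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A company with EUR 2 billion in global revenue:
# 7% of 2 billion = 140 million, which exceeds 35 million.
assert max_fine_eur(2_000_000_000) == 140_000_000
```

For large groups, the revenue-based cap dominates, which is why the exposure scales with company size rather than stopping at a fixed amount.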
📌 Important — Status as of April 2026: The European Parliament adopted its position on March 26, 2026 (569 votes in favor). These new dates have not yet been finalized: they must be approved by the Council of the EU during the trilogues scheduled for spring 2026. August 2026 remains the legally binding date until the revised text is formally adopted. It is advisable to plan ahead without waiting for the final decision.
It would be simplistic to view the AI Act solely as a regulatory burden. Organizations that prepare for this transformation reap tangible benefits.
ISO 42001, the first international standard for AI management systems, serves as a key enabler here. It gives organizations a proven structure for establishing this governance framework, drawing on best practices in risk management.
The AI Act is neither a theoretical document nor a distant concern. It is in effect, its implementation is accelerating, and the window of opportunity to prepare is narrowing. The right question is not “Are we affected?” Virtually all companies that use AI are affected. The right question is: “What is the actual use of our AI systems, and what impacts might they have?”
It all starts with this answer.
👉 Watch the recording of our webinar: How to Secure Your AI Applications from A to Z
Your regulatory obligations are changing. Let’s work together to identify your compliance priorities.
Let's discuss your compliance →