
Artificial intelligence is redefining the scope of cybersecurity. Faced with systems capable of learning, reasoning, and acting autonomously, traditional approaches are no longer sufficient. It is against this backdrop that the ISO/IEC 27090 standard has emerged: currently at the FDIS (Final Draft International Standard) stage, the step immediately preceding official publication, it is a reference framework designed specifically to address the threats and failures unique to AI systems.
Whether you’re a CISO, an AI solutions provider, or a compliance officer, this article gives you the keys to understanding this new standard and to anticipating its integration into your security strategy.
Cybersecurity has historically rested on three fundamental pillars: confidentiality, integrity, and availability (the CIA triad). These principles have shaped decades of best practices, from access management to business continuity plans.
But AI is fundamentally changing the game. Organizations are rolling out conversational assistants (ChatGPT, Copilot), autonomous agents, RAG (Retrieval-Augmented Generation) architectures, recommendation systems, and machine learning pipelines on a massive scale. These systems are no longer just software: they learn from data, interact with users, and can act autonomously.
This development introduces new attack vectors that traditional cybersecurity does not cover, such as prompt injection, training data poisoning, model extraction, and adversarial examples.
Securing AI therefore means applying cybersecurity principles to a new generation of digital systems—using methods and tools tailored to their specific characteristics.
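To make one of these AI-specific vectors concrete, prompt injection can be partially caught by simple input screening. The sketch below is a naive, illustrative heuristic (the patterns and function name are our own, not taken from the standard); production systems would layer such filters with model-side defenses.

```python
import re

# Naive, illustrative screening for prompt-injection attempts. The patterns
# below are examples only, not an exhaustive or standardized list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your|the) system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(looks_like_injection("Ignore all previous instructions."))  # True
print(looks_like_injection("What time is it in Paris?"))          # False
```

Pattern matching of this kind is easy to evade, which is precisely why the standard treats model-level robustness and monitoring as complementary controls.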
The ISO/IEC 27090 standard (Cybersecurity — Artificial intelligence — Guidelines for addressing security threats and failures in artificial intelligence systems) has been at the FDIS stage since February 2026—a very advanced stage immediately preceding official publication. It is part of the ISO 27000 family of standards and complements an already established standards ecosystem:
The distinctive feature of ISO 27090 is clear: it focuses primarily on the technical security mechanisms and threats specific to AI models. It complements ISO 42001 — which addresses AI governance and covers, notably through its Annex A (A.6.2 on data for AI systems, A.6.5 on documentation), aspects of data traceability and the lifecycle — by providing additional technical and security depth, with a more operational focus on processes such as model training, continuous validation, and resistance to attacks.
💡 Note for compliance teams: ISO 42005, which addresses the impact assessment of AI systems, is directly linked to clause 6.1.4 of ISO 42001 and the requirements of the AI Act (Article 27 on the impact assessment on fundamental rights). Its omission from a compliance approach constitutes a significant blind spot.
For organizations certified to ISO 27001, ISO 27090 enables the extension of risk analysis to specific AI components: training data, models, and ML pipelines. For organizations with subsidiaries or clients outside the EU, the NIST AI RMF (AI Risk Management Framework) is an essential complementary framework, widely adopted in North America and increasingly used in Europe to ensure interoperability among standards.
Chapter 5 lays the groundwork: how to apply traditional information security principles (the CIA triad) to AI systems throughout their lifecycle.
Key recommended practices include
💡 The autonomous operation of AI agents challenges traditional identity-based Zero Trust tools. There is a growing shift toward models based on context and intent (intent-based security).
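An intent-based check can be sketched as follows. The agent identifiers, intents, and policy table are hypothetical, purely to illustrate the idea of authorizing what an agent is trying to do rather than only who it is.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    intent: str    # what the agent is trying to do, e.g. "summarize_document"
    resource: str  # what it acts on, e.g. "doc://reports/q3"

# Hypothetical policy: each agent is allowed a small set of declared intents.
ALLOWED_INTENTS = {
    "support-bot": {"summarize_document", "answer_faq"},
    "billing-bot": {"read_invoice"},
}

def authorize(action: AgentAction) -> bool:
    """Allow the action only if the agent's declared intent is whitelisted."""
    return action.intent in ALLOWED_INTENTS.get(action.agent_id, set())

print(authorize(AgentAction("support-bot", "answer_faq", "faq://42")))   # True
print(authorize(AgentAction("support-bot", "read_invoice", "crm://7")))  # False
```

A real deployment would also evaluate the context (time, data sensitivity, chain of delegation), but the contrast with purely identity-based checks is already visible here.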
This chapter outlines the main attacks targeting AI models and their security implications. For each type of attack, the standard describes its nature, its potential impacts, and the associated detection and mitigation measures.

This mapping enables security teams to incorporate AI risks into their existing cybersecurity analyses by linking them directly to the CIA properties.
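As an illustration, such a mapping can be kept as a simple lookup alongside the risk register. The attack names and property assignments below are a common-knowledge sketch, not the standard's official table:

```python
# Illustrative mapping of typical AI attacks to the CIA properties they
# primarily threaten (not ISO 27090's official mapping).
ATTACK_TO_CIA = {
    "prompt injection":     {"confidentiality", "integrity"},
    "data poisoning":       {"integrity"},
    "model extraction":     {"confidentiality"},
    "membership inference": {"confidentiality"},
    "denial of service":    {"availability"},
}

def impacted_properties(attack: str) -> set:
    """Return the CIA properties an attack primarily threatens."""
    return ATTACK_TO_CIA.get(attack, set())

print(sorted(impacted_properties("data poisoning")))  # ['integrity']
```

Even this minimal structure lets an existing risk analysis query AI threats with the same confidentiality/integrity/availability vocabulary it already uses.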
Chapter 7 describes security controls designed to mitigate the impact of attacks. It emphasizes a fundamental point: the security of an AI system must be viewed as a coherent whole, rather than as a series of independent safeguards. Certain measures may interact with one another or degrade model performance if they are not evaluated holistically.
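The "coherent whole" idea can be sketched as a chain of guards evaluated together, where a response is released only if every control accepts it. The guard names and thresholds below are illustrative assumptions, not controls prescribed by the standard.

```python
from typing import Callable, List

# Each guard inspects a candidate model response and can veto it; the chain
# is evaluated as a whole rather than each control in isolation.
Guard = Callable[[str], bool]

def length_guard(text: str) -> bool:
    return len(text) < 2000  # cap output volume to limit data exfiltration

def secret_guard(text: str) -> bool:
    return "BEGIN PRIVATE KEY" not in text  # crude secret-leak check

def apply_guards(response: str, guards: List[Guard]) -> str:
    """Return the response only if every guard accepts it."""
    if all(guard(response) for guard in guards):
        return response
    return "[response blocked by security policy]"

print(apply_guards("Here is your summary.", [length_guard, secret_guard]))
```

Composing controls this way also makes their interactions testable: tightening one guard (for example, the length cap) can be evaluated against the whole chain rather than in isolation.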
The 10 main categories of measures covered:
ISO 27090 distinguishes between two key roles, a distinction consistent with the logic of the European AI Act:
The vendor develops or markets an AI system. They focus on securing the model and its lifecycle: robustness, resistance to attacks, and the integrity of training data.
The integrator implements this system within a business context. They secure the integration environment, including data access, API exposure, business use cases, and monitoring. They must have a thorough understanding of the security challenges specific to these models in order to validate the systems they integrate and assume operational responsibility for them.
This dual approach is essential for any organization seeking to deploy AI systems in a reliable, AI Act-compliant, and auditable manner.
Despite its ambitious scope, the draft standard leaves several areas open to further exploration.
In France, ANSSI has published security recommendations specifically for generative AI systems, providing an operational framework that complements international standards. The France AI Hub also contributes to a comprehensive vision of AI security through its white papers and research summaries.
The security of AI systems now goes beyond the traditional CIA framework. It extends to areas such as decision transparency, algorithmic accountability, explainability (XAI, Explainable AI), and the tangible impact on individuals.
For organizations, proactively adopting ISO 27090—in conjunction with ISO 42001 for governance, ISO 42005 for impact assessment, ISO 27001 for information security, and the NIST AI RMF for international contexts—enables the development of a robust, documented, and audited AI security framework. It also sends a strong signal to customers, partners, and regulators: the security of your AI systems is taken seriously, integrated from the design phase, and not added as an afterthought.
In a landscape where risks evolve as quickly as architectural innovations (agents, RAG, multimodal LLMs), this standards-based approach has become an essential prerequisite for any responsible AI deployment.
Would you like to learn more about this topic or assess your level of readiness for the requirements of ISO 27090? Check out our webinars dedicated to AI security in the enterprise.