All articles
8
min
ISO 42001 and AI Governance

ISO 27090: Understanding the Future Cybersecurity Standard for AI Systems

Artificial intelligence is redefining the scope of cybersecurity. In the face of systems capable of learning, reasoning, and acting autonomously, traditional approaches are no longer sufficient. It is against this backdrop that the ISO/IEC 27090 standard has emerged—currently in the FDIS (Final Draft International Standard) stage since July 2025, the stage preceding its official publication. A reference framework designed specifically to address the threats and failures unique to AI systems.

Whether you’re a CISO, an AI solutions provider, or a compliance officer, this article provides the insights you need to understand the benefits of AI and plan for its integration into your security strategy.

Why Traditional Cybersecurity Is No Longer Enough in the Face of AI

Cybersecurity has historically been based on three fundamental pillars: confidentiality, integrity, and availability (the CID triad). These principles have shaped decades of best practices, from access management to business continuity plans.

But AI is fundamentally changing the game. Organizations are rolling out conversational assistants (ChatGPT, Copilot), autonomous agents, RAG (Retrieval-Augmented Generation) architectures, recommendation systems, and machine learning pipelines on a massive scale. These systems are no longer just software: they learn from data, interact with users, and can act autonomously.

This development introduces new attack vectors that traditional cybersecurity does not cover:

  • Manipulating training data to bias a model
  • Exploiting a model's outputs to extract sensitive data
  • Prompt injection attacks to bypass security measures
  • The theft or reconstruction of a proprietary model

Securing AI therefore means applying cybersecurity principles to a new generation of digital systems—using methods and tools tailored to their specific characteristics.

ISO 27090: Overview and Positioning

The ISO/IEC 27090 standard (Cybersecurity — Artificial intelligence — Guidelines for addressing security threats and failures in artificial intelligence systems) has been at the FDIS stage since February 2026—a very advanced stage immediately preceding official publication. It is part of the ISO 27000 family of standards and complements an already established standards ecosystem:

Standard Scope
ISO 27001 Information Security Management (Certifiable)
ISO 42001 Governance and Management of AI Systems (Certifiable)
ISO 42005 AI Impact Assessment
ISO 23894 AI Risk Management
ISO 27005 Security Risk Management
ISO 27090 Threats and safeguards specific to AI systems
NIST AI RMF AI Risk Management Framework (United States, with growing international use)

The distinctive feature of ISO 27090 is clear: it focuses primarily on the technical security mechanisms and threats specific to AI models. It complements ISO 42001 — which addresses AI governance and covers, notably through its Annex A (A.6.2 on data for AI systems, A.6.5 on documentation), aspects of data traceability and the lifecycle — by providing additional technical and security depth, with a more operational focus on processes such as model training, continuous validation, and resistance to attacks.

💡 Note for compliance teams: ISO 42005, which addresses the impact assessment of AI systems, is directly linked to clause 6.1.4 of ISO 42001 and the requirements of the AI Act (Article 27 on the impact assessment on fundamental rights). Its omission from a compliance approach constitutes a significant blind spot.

For organizations certified to ISO 27001, ISO 27090 enables the extension of risk analysis to specific AI components: training data, models, and ML pipelines. For organizations with subsidiaries or clients outside the EU, the NIST AI RMF (AI Risk Management Framework) is an essential complementary standard, widely adopted in North America and increasingly used in Europe to ensure interoperability among standards.

Structure of the Standard: 3 Key Chapters + 2 Appendices

Chapter 5 — Applying Information Security to AI Systems

Chapter 5 lays the groundwork: how to apply traditional information security (CIS) principles to AI systems throughout their lifecycle.

Key recommended practices include

  • Build security into the design from the very beginning (development, training, deployment, and operation)
  • Apply Zero Trust principles: never trust by default, verify every access attempt, and apply the principle of leastprivilege‍
  • Establishing AI governance: inventory of models, risk analysis,traceability‍
  • Securing the supply chain through concepts such asAI BOM (AI Bill of Materials) to trace the origin of models anddata‍
  • Reducing the data attack surface: minimization, anonymization, and limitingretention periods‍
  • Continuously monitor and test models to detect drift, anomalies, andattacks‍
  • Use threat modeling and red teaming to simulatereal-worldattacks‍‍

💡 The autonomous operations of AI agents are challenging traditional identity-based Zero Trust tools. There is a growing trend toward models based on context and intent (intent-based security).

Chapter 6 — Identifying Threats Specific to AI Systems

This chapter outlines the main attacks targeting AI models, their security implications, and the associated detection methods. For each type of attack, the standard describes the nature of the attack, its potential impacts, and the associated mitigation measures:

  • Data poisoning: the injection of malicious data into training data to degrade the model's accuracy or make it vulnerable to future attacks
  • Evasion attack: altering inputs in a way that is imperceptible to humans to produce incorrect outputs
  • Membership inference: creating input-output pairs to identify data stored by the model
  • Model exfiltration: reconstruction of a model that is functionally equivalent to the original
  • Model inversion: using model outputs to reconstruct training data and disclose sensitive information
  • Direct model poisoning: direct manipulation of the model during development or deployment (without using training data)
  • Direct model theft: the direct theft of model parameters from the production or development environment
  • Direct training data leak: unauthorized access to training data
  • Model input/output leak: compromise of the confidentiality of the model's input or output data
  • Prompt injection: malicious instructions in prompts to trigger unintended behavior
  • Output injection attacks: outputs containing XSS (cross-site scripting) attacks

This mapping enables security teams to incorporate AI risks into their existing cybersecurity analyses by linking them directly to CID properties.

Chapter 7 — Mitigation Measures and Best Practices

Chapter 7 describes security controls designed to mitigate the impact of attacks. It emphasizes a fundamental point: the security of an AI system must be viewed as a coherent whole, rather than as a series of independent safeguards. Certain measures may interact with one another or degrade model performance if they are not evaluated holistically.

The 10 main categories of measures covered:

  1. Overview — Evaluate mitigation measures holistically, analyze their interactions, and ensure continuous monitoring duringproduction‍
  2. Conflicting interactions — Systematically assess the combined effects of defense mechanisms prior todeployment‍
  3. ‍Lifecycle continuity — Combining multiple layers of protection (data control, dropout, input/output filtering)
  4. Degradation over time — Continuous monitoring, regular testing, retraining, and validation beforeredeployment‍
  5. Logging and Monitoring — Log entries, exits, users, dates, and template versions to detect suspicious behavior andautomated attacks
  6. Development environment — Secure data, code, settings, documentation, and RAG databases to prevent leaks andtampering‍
  7. Detection of malicious inputs — OOD (Out Of Distribution) detection, statistical analysis,anomaly detection‍
  8. Rate limiting (throttling) — Limit the frequency of requests to prevent scraping, reverse engineering, andadversarial attacks‍
  9. Hiding confidence levels — Limit or round confidence scores to preventmodel reconstruction‍
  10. Model Size Limitation — Reduce complexity to minimize the risk of data retention and data leakage

Provider or Deployer: Different Responsibilities

ISO 27090 distinguishes between two key roles, a distinction consistent with the logic of the European AI Act:

The vendor develops or markets an AI system. It focuses on securing the model and its lifecycle: robustness, resistance to attacks, and the integrity of training data.

The integrator implements this system within a business context. They secure the integration environment, including data access, API exposure, business use cases, and monitoring. They must have a thorough understanding of the security challenges specific to these models in order to validate the systems they integrate and assume operational responsibility for them.

This dual approach is essential for any organization seeking to deploy AI systems in a reliable, AI Act-compliant, and auditable manner.

Areas that are still underrepresented

Despite its ambitious scope, the draft standard leaves several areas that require further exploration:

  • The security of autonomous agents, MCP (Model Context Protocol) servers, and multi-agent architectures—practical guides published byOWASP GenAI in October 2025 already offer recommendations for securing these architectures
  • Cognitive and informational risks: user manipulation, automated disinformation
  • The economic and strategic risks associated with the use of large-scale models

In France, ANSSI has published security recommendations specifically for generative AI systems, providing an operational framework that complements international standards. The France AI Hub also contributes to a comprehensive vision of AI security through its white papers and research summaries.

ISO 27090: A Tool for Building Trust, Not Just Compliance

The security of AI systems now goes beyond the traditional CID framework. It extends to areas such as decision transparency, algorithmic accountability, explainability (XAI—Explainable AI), and the tangible impact on individuals.

For organizations, proactively adopting ISO 27090—in conjunction with ISO 42001 for governance, ISO 42005 for impact assessment, ISO 27001 for information security, and the NIST AI RMF for international contexts—enables the development of a robust, documented, and audited AI security framework. It also sends a strong signal to customers, partners, and regulators: the security of your AI systems is taken seriously, integrated from the design phase, and not added as an afterthought.

In a landscape where risks evolve as quickly as architectural innovations (agents, RAG, multimodal LLMs), this regulatory approach has become an essential prerequisite for any responsible AI deployment.

Would you like to learn more about this topic or assess your level of readiness for the requirements of ISO 27090? Check out our webinars dedicated to AI security in the enterprise.

Join our newsletter
Cybersecurity tips, analyses and news delivered to your inbox every month! 
Learn more about our privacy policies.
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.
More content

Our latest Blog posts