
EU AI Act Article 9: Risk management system

Article 9 of the EU AI Act requires a documented, lifecycle-long risk management system for high-risk AI systems — not a one-time assessment. It must identify foreseeable risks to health, safety, and fundamental rights, evaluate them under intended use and reasonably foreseeable misuse, adopt targeted mitigation measures, and produce testing evidence sufficient to defend the residual-risk judgement. Enforceable from 2 August 2026.

Source: Regulation (EU) 2024/1689 (EU AI Act), CELEX:32024R1689.

What Article 9 actually says

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:

(a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;

(b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;

(d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).

6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.

Paragraphs: 9(1) · 9(2)(a) · 9(2)(b) · 9(2)(d) · 9(6)

Application date

2026-08-02

Status: UPCOMING

Penalty band

Up to €15M or 3% of global annual turnover, whichever is higher

Sanction route: Article 99(4)

Article 9 becomes enforceable on 2 August 2026 for high-risk AI systems and requires a documented, lifecycle-long risk management system — not a one-time assessment. Failure to maintain it routes through the Article 16 provider obligations and is sanctionable up to €15M or 3% of global annual turnover under Article 99(4). The operational consequence is that risk management must produce versioned, reviewable artefacts mapped to identified hazards, with testing evidence sufficient to defend the residual-risk judgement.

Practical compliance with RiskForge

RiskForge produces an audit-trailed Risk Management File aligned with Annex IV documentation requirements in roughly 30 minutes. Eight risk dimensions, 50+ guided questions, SHA-256 hash-chained tamper-evident audit log, NIST AI RMF and ISO/IEC 42001 cross-mapping. RiskForge is a screening + documentation artefact, not a substitute for notified-body conformity assessment.

  • 9(1): Generates a versioned risk-management-system file: hazard register, risk owners, review cadence, change history (a sketch of one register entry follows this list)
  • 9(2)(a): Structured hazard identification across health, safety and fundamental-rights dimensions with intended-purpose framing
  • 9(2)(b): Reasonably-foreseeable-misuse scenario library with likelihood × severity scoring and mitigation linkage
  • 9(2)(d): Maps each identified risk to a targeted mitigation control and tracks residual-risk acceptance with a sign-off trail
  • 9(6): Test-plan generator tying each hazard to a measurable test, with prior-defined metrics and probabilistic thresholds (Art. 9(8))
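For concreteness, the sketch below shows one plausible shape for a hazard-register entry of the kind the 9(1) item describes. It is an illustrative assumption: the field names and types are hypothetical, not RiskForge's actual schema.

```python
# Illustrative sketch only: a hypothetical hazard-register entry.
# Field names and types are assumptions, not RiskForge's actual schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class HazardEntry:
    hazard_id: str                   # stable identifier, e.g. "HAZ-007"
    description: str                 # what can go wrong, in intended-purpose framing (9(2)(a))
    dimension: str                   # "health" | "safety" | "fundamental-rights"
    likelihood: int                  # 1-5 band on the scoring matrix
    severity: int                    # 1-5 band on the scoring matrix
    mitigation: str                  # targeted control per Art. 9(2)(d)
    owner: str                       # named risk owner
    review_due: date                 # next scheduled review (continuous iteration, 9(2))
    residual_accepted: bool = False  # residual-risk sign-off recorded in the trail
```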

Install in 30 seconds

```bash
pip install riskforge
```

Frequently asked questions

Direct answers to common questions about Article 9 and how RiskForge addresses it. Regulatory citations reference EUR-Lex CELEX:32024R1689.

What does EU AI Act Article 9 require?
A documented, lifecycle-long risk management system for high-risk AI systems — not a one-time assessment. It must identify foreseeable risks to health, safety, and fundamental rights; estimate and evaluate them under intended use and reasonably foreseeable misuse; adopt targeted mitigation measures; and produce testing evidence sufficient to defend the residual-risk judgement. Source: Regulation (EU) 2024/1689, Article 9(1), 9(2)(a), 9(2)(b), 9(2)(d) and 9(6).
When does Article 9 become enforceable?
Article 9 obligations for high-risk AI systems become enforceable on 2 August 2026, per Article 113 of the EU AI Act. Source: Regulation (EU) 2024/1689 Article 113.
How long does a complete Risk Management File take with RiskForge?
Approximately 30 minutes for an interactive 8-dimension assessment with 50+ guided questions, depending on the complexity of the system being assessed. The output is a JSON + PDF Risk Management File aligned with Annex IV documentation requirements. This is a screening artefact, not a substitute for notified-body review.
Is RiskForge a notified-body conformity assessment?
No. RiskForge produces documented evidence supporting an Article 9 risk management system. Conformity assessment by a notified body, where required, is a separate process performed by accredited entities. RiskForge output is one input to that process, not a replacement for it.
What scoring methodology does RiskForge use?
A 5×5 likelihood × severity matrix with automatic risk-band classification, applied per identified risk. Annex III pattern matching pre-populates risk items for known high-risk scenarios (credit scoring, hiring, facial recognition, medical diagnosis).
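As a rough illustration of that methodology, the sketch below reduces a 5×5 matrix with automatic band classification to a small scoring function. The band cut-offs are assumptions for illustration, not RiskForge's actual thresholds.

```python
# Minimal sketch of 5x5 likelihood x severity scoring with band
# classification. Cut-offs are illustrative, not RiskForge's actual bands.

def risk_band(likelihood: int, severity: int) -> str:
    """Classify one identified risk on a 5x5 matrix into a named band."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be in 1..5")
    score = likelihood * severity  # 1..25
    if score >= 15:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

assert risk_band(5, 5) == "critical"
assert risk_band(2, 3) == "medium"
```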
Does RiskForge cross-map to NIST AI RMF and ISO/IEC 42001?
Yes. Each risk-management dimension is cross-mapped to NIST AI RMF GOVERN/MAP/MEASURE/MANAGE categories, ISO/IEC 42001 controls, the Colorado AI Act, and Texas HB 1709 — so a single assessment produces evidence reusable across multiple frameworks.
What is the penalty for Article 9 non-compliance?
Up to €15M or 3% of global annual turnover, whichever is higher, under Article 99(4). The Article 16 provider obligation chain routes Article 9 failures through this penalty band.
Is RiskForge free?
Yes. Apache 2.0 licensed, free for any use including commercial. No telemetry: CI enforces this by running the test suite with all outbound network calls blocked via pytest-socket.
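A minimal sketch of what that CI guard looks like, assuming the suite is invoked with pytest-socket's documented `--disable-socket` flag; the test itself is hypothetical.

```python
# Hypothetical test demonstrating the guard. Under `pytest --disable-socket`,
# pytest-socket replaces socket construction, so any outbound connection
# attempt raises SocketBlockedError instead of reaching the network.
import socket

import pytest
from pytest_socket import SocketBlockedError

def test_outbound_network_is_blocked():
    with pytest.raises(SocketBlockedError):
        socket.create_connection(("example.com", 443), timeout=1)
```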
Can I customize the question bank for sector-specific risks?
Yes. The question bank is plug-in based via Python entry points. The core 8 dimensions cover the regulatory baseline; sector-specific additions (medical devices, financial services) are extensible through user-supplied bank YAML files.
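A minimal sketch of the standard entry-point discovery pattern that answer refers to (Python 3.10+). The group name `riskforge.question_banks` is an assumption for illustration, not the documented group.

```python
# Sketch of entry-point plug-in discovery via importlib.metadata.
# The group name "riskforge.question_banks" is a hypothetical example.
from importlib.metadata import entry_points

def discover_question_banks() -> dict:
    """Load every question-bank plug-in registered under the group."""
    banks = {}
    for ep in entry_points(group="riskforge.question_banks"):
        banks[ep.name] = ep.load()  # e.g. a loader for a bank YAML file
    return banks
```

A sector pack would then register its loader under that group in its own pyproject.toml and be picked up without any changes to the core tool.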
How is the audit trail tamper-evident?
Every change is recorded with a SHA-256 hash chained to the previous entry. `riskforge verify` recomputes the chain and exits with code 2 if any link is broken — making tampering or partial deletion CI-detectable.
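For intuition, here is a minimal sketch of SHA-256 hash chaining and verification under an assumed record layout; it is not RiskForge's on-disk format, but the exit-code-2 convention mirrors `riskforge verify`.

```python
# Sketch of a SHA-256 hash chain: each entry's hash covers its record
# plus the previous entry's hash, so editing or deleting any entry
# breaks a link. Record layout is an illustrative assumption.
import hashlib
import json
import sys

def link_hash(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    prev = "0" * 64  # genesis value for the first link
    for entry in entries:
        if entry["hash"] != link_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

if __name__ == "__main__":
    with open("audit_log.json") as f:
        entries = json.load(f)
    sys.exit(0 if verify_chain(entries) else 2)  # 2 on any broken link
```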

When the findings land on a governance desk

Tools surface problems. Programmes solve them.

When the residual-risk judgement is the part that gets defended in front of a regulator, the operating model around the file matters as much as the file. AskAjay's Minimum Viable Governance framework runs that programme work end-to-end, including the 30-day risk management sprint that pairs with RiskForge output.

Framework: MVG (Minimum Viable Governance) — at AskAjay.ai, the advisory arm of AI Exponent LLC.

Explore the MVG (Minimum Viable Governance) framework →