Open Source AI Governance Tools

A growing portfolio of open source tools designed to help teams build, evaluate, and govern AI systems responsibly. Every tool maps to specific obligations under the EU AI Act, NIST AI RMF, ISO/IEC 42001, and emerging US state laws — so you know exactly which compliance gap it addresses, and in which jurisdiction.

Our open source tools are free forever. Sigil, the runtime governance platform, is currently in development — request early access.

Open Source Tools

Free. Apache 2.0. No account required.

Production-ready libraries and frameworks for building compliant AI systems. Install via pip or Docker.

Article 53

License Compliance Checker

Flagship

Scans AI models, software packages, and agentic pipelines for license compliance across 8 ecosystems. Detects HuggingFace model references in source code and in GGUF/ONNX model files, and generates EU AI Act Article 53 audit evidence with an honest dataset risk registry.

EU AI Act Relevance

Article 53 · GPAI Compliance: Generates audit evidence supporting EU AI Act Article 53 documentation obligations — evaluates model card completeness, license compliance, and training data risk for AI components in your stack.

pip install license-compliance-checker
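The HuggingFace model-reference detection described above can be approximated with a simple pattern match over source code. This is an illustrative sketch under assumed conventions, not the tool's actual detection logic:

```python
import re

# Matches common HuggingFace loader calls such as
#   AutoModelForCausalLM.from_pretrained("org/model-name")
HF_REF = re.compile(r"""from_pretrained\(\s*["']([\w.-]+/[\w.-]+)["']""")

def find_model_refs(source: str) -> list[str]:
    """Return HuggingFace repo IDs referenced in a source string."""
    return HF_REF.findall(source)

code = 'model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")'
print(find_model_refs(code))  # ['mistralai/Mistral-7B-v0.1']
```

Once a repo ID is extracted, its license metadata can be looked up and checked against your policy — which is where the per-ecosystem compliance rules come in.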
Article 9

RiskForge

Flagship

Guided 8-dimension risk assessment CLI with 50+ questions drawn from EU AI Act Article 9 requirements, Annex III pattern matching, and a SHA-256 hash-chained audit trail. Produces a legally defensible Risk Management File (JSON + PDF) that satisfies Annex IV documentation requirements in approximately 30 minutes instead of weeks of consulting work.

EU AI Act Relevance

Article 9 · Risk Management: Produces audit-ready Article 9 risk management files for high-risk AI systems. Covers all 8 EU AI Act risk dimensions — health & safety, fundamental rights, discrimination, privacy, transparency, human oversight, robustness, and data governance — with cross-maps to NIST AI RMF and ISO/IEC 42001.

pip install riskforge
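The SHA-256 hash-chained audit trail mentioned above follows a standard pattern: each entry embeds the hash of the previous one, so no earlier answer can be altered without breaking every later hash. A minimal sketch of that pattern (not RiskForge's actual implementation):

```python
import hashlib
import json

def append_entry(chain: list, event: dict) -> list:
    """Append an event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    body = {"event": event, "prev_hash": prev_hash}
    # Canonical JSON (sorted keys) so the digest is reproducible across runs
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

chain: list = []
append_entry(chain, {"question": "Q1", "answer": "yes"})
append_entry(chain, {"question": "Q2", "answer": "no"})
```

Because each `prev_hash` commits to the entire history before it, an auditor only needs to recompute the digests in order to confirm nothing was edited after the fact.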
Articles 11 + 18

Agentic Document Analyser

Infrastructure

Converts unstructured compliance documents — risk assessments, model cards, contracts, audit logs — into structured JSON using Vision-Language Models. Acts as the evidence processing layer for the AiExponent compliance toolchain. Feeds Article 11 technical documentation and Article 18 log preservation workflows.

EU AI Act Relevance

Articles 11 + 18 · Evidence Processing: Structures unstructured compliance documents into machine-readable JSON for Article 11 technical documentation packages and Article 18 log preservation workflows. Does not implement Article 9 risk management — use RiskForge for that.

docker compose up
Article 15

RAG Benchmarking

Flagship

Plug in any RAG system — LangChain, LlamaIndex, or custom — and benchmark it against classic and agentic-era metrics: faithfulness, answer relevancy, retrieval precision, plus four agentic metrics for multi-step agents. Measured faithfulness of 0.958 on the 50-sample golden dataset.

EU AI Act Relevance

Article 15 · Accuracy Requirements: Provides systematic accuracy testing and documentation for high-risk AI systems under Article 15.

pip install rag-benchmarking
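The metrics named above follow widely used RAG-evaluation definitions: retrieval precision is the fraction of retrieved chunks that are relevant, and faithfulness is the fraction of claims in the answer that the retrieved context supports. A simplified sketch of those definitions (function names are illustrative, not the package's API; in practice claim extraction and verification are done by an LLM judge):

```python
def retrieval_precision(retrieved_ids: list[str], relevant_ids: list[str]) -> float:
    """Fraction of retrieved chunks that are relevant to the query."""
    if not retrieved_ids:
        return 0.0
    relevant = set(relevant_ids)
    return sum(1 for c in retrieved_ids if c in relevant) / len(retrieved_ids)

def faithfulness(supported_claims: int, total_claims: int) -> float:
    """Fraction of claims in the generated answer that the
    retrieved context supports."""
    return supported_claims / total_claims if total_claims else 0.0

print(retrieval_precision(["c1", "c2", "c3", "c4"], ["c1", "c3"]))  # 0.5
print(faithfulness(3, 4))  # 0.75
```

Scores like these, computed per-sample over a golden dataset and archived with the run configuration, are the kind of systematic accuracy evidence Article 15 asks for.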

Global Regulatory Coverage

Our tools map to the major AI regulations worldwide. Compliance with one framework often satisfies obligations in others.

Primary

EU AI Act

The world's first comprehensive AI regulation. Phased enforcement 2024–2027.

Articles 4, 5, 9, 10, 13, 15, 53, 72

Fines up to €35M or 7% global revenue

US Federal

NIST AI RMF

Voluntary framework, required of US federal agencies under OMB policy. De facto standard for US enterprise AI.

Govern · Map · Measure · Manage

EO 14110 · OMB M-24-10

International

ISO/IEC 42001:2023

AI Management System standard. Certification increasingly required for enterprise procurement.

39 Annex A controls

Maps to EU AI Act Annex C

UK

UK AI Principles

Principles-based approach via FCA, ICO, and sector regulators. Legislation expected 2026–2027.

Safety · Transparency · Fairness · Accountability

FCA · ICO · CMA sector enforcement

US State

Colorado AI Act

Active since February 2026. Annual algorithmic impact assessments required for high-risk AI.

SB 24-205 · Impact assessments

$20K/violation · AG enforcement only (no private right of action)

Canada

Canada AIDA

Bill C-27 is modelled on the EU AI Act. Expected passage mid-2026 with a 2-year implementation window.

High-impact AI risk assessments

Up to $25M penalties

In Development · Early Access Open

Sigil — Runtime Governance Platform

Commercial AI agent governance platform in active development. Real-time policy enforcement, tamper-evident audit logs, and compliance reporting across EU AI Act Articles 14/17 (human oversight + quality management), NIST AI RMF, and ISO/IEC 42001. Early access available on request.

Articles 14, 17 · Runtime Governance · NIST AI RMF · ISO/IEC 42001

What you'll get at launch

Real-time policy enforcement

Block or amend AI agent actions at runtime before they reach users or downstream systems.

Tamper-evident audit log

SHA-256 hash-chained, append-only. Verifiable with a single command. Article 12/17 ready.

Article 14 human oversight

Configurable human-in-the-loop gates on high-impact actions with structured reviewer evidence.

Cross-framework reporting

One evidence layer, multiple compliance outputs — EU AI Act, NIST AI RMF, ISO/IEC 42001.
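"Verifiable with a single command" for a hash-chained log means walking the chain and recomputing each digest; any edit to a past entry surfaces immediately. A minimal sketch of that verification pass, under the same assumed entry format as a generic hash chain (not Sigil's actual implementation):

```python
import hashlib
import json

def _digest(event: dict, prev_hash: str) -> str:
    body = {"event": event, "prev_hash": prev_hash}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify_chain(chain: list) -> int:
    """Return the index of the first broken entry, or -1 if the chain is intact."""
    prev_hash = "0" * 64  # genesis sentinel
    for i, entry in enumerate(chain):
        if entry["prev_hash"] != prev_hash:
            return i  # link to the previous entry is broken
        if entry["hash"] != _digest(entry["event"], entry["prev_hash"]):
            return i  # entry body was altered after it was written
        prev_hash = entry["hash"]
    return -1

# Build a valid two-entry chain, then tamper with the first event
h1 = _digest({"action": "approve"}, "0" * 64)
chain = [
    {"event": {"action": "approve"}, "prev_hash": "0" * 64, "hash": h1},
    {"event": {"action": "deploy"}, "prev_hash": h1,
     "hash": _digest({"action": "deploy"}, h1)},
]
intact = verify_chain(chain)            # -1: chain verifies
chain[0]["event"]["action"] = "reject"
tampered = verify_chain(chain)          # 0: first entry no longer matches its hash
```

Append-only storage plus this recompute-and-compare check is what makes such a log tamper-evident rather than merely tamper-resistant.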

Pricing. We're finalising pricing with design-partner customers. Early-access participants help shape the tiers and get founding-customer terms.

Want Sigil before launch?

We're working with a small number of design-partner teams before general availability. If you're building high-risk AI systems and want runtime governance aligned to EU AI Act Articles 14 & 17, get in touch.

Request Early Access