EU AI Act Article 5 — Prohibited AI practices
Article 5 of the EU AI Act prohibits eight categories of AI practice — subliminal manipulation, exploitation of vulnerabilities, social scoring of natural persons based on social behaviour or personal characteristics, real-time remote biometric identification in public spaces (with narrow exceptions), biometric categorisation inferring sensitive characteristics, emotion inference at work or in education, untargeted facial-image scraping, and individual criminal-risk profiling. The prohibitions are absolute and have been enforceable since 2 February 2025.
Source: Regulation (EU) 2024/1689 (EU AI Act), CELEX:32024R1689.
What Article 5 actually says
1. The following AI practices shall be prohibited:
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;
(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
(g) the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
Paragraphs: 5(1)(a) · 5(1)(b) · 5(1)(f) · 5(1)(g)
Application date
2025-02-02
Status: ENFORCED
Penalty band
Up to €35M or 7% of global annual turnover, whichever is higher
Sanction route: Article 99(3)
Article 5 has been enforceable since 2 February 2025 — placing or using a prohibited-practice AI system on the EU market today exposes the provider, importer, distributor, or deployer to the highest tier of fines in the regulation: up to €35M or 7% of global annual turnover under Article 99(3). The eight prohibitions are absolute — no consent, opt-out, or post-hoc mitigation rescues a prohibited practice once the system meets the prohibition's criteria. Screening systems before deployment, before code ships, is the only defensible posture.
Practical compliance with LitmusAI
LitmusAI screens an AI system description against all eight Article 5 prohibitions and returns Red / Amber / Clear verdicts with primary-source citations. The reference ruleset shipped with v1.0 is UNREVIEWED (no external EU AI Act lawyer review yet); a Bring-Your-Own signed-ruleset path is available for teams that need lawyer-reviewed output today.
- 5(1)(a): Detects subliminal-manipulation indicators in system descriptions and outputs; flags when a system materially distorts behaviour against the user's interest
- 5(1)(b): Pattern-matches vulnerable-population markers (minors, persons with disabilities, persons in vulnerable economic situations) and flags exploitation patterns
- 5(1)(f): Emits a Red verdict on any system combining emotion inference with workplace or education deployment context (without the medical/safety carve-out)
- 5(1)(g): Detects untargeted-facial-image-scraping indicators (web crawl + facial recognition + database creation) — the exact pattern Article 5(1)(g) prohibits
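The indicator-based rules above can be sketched as a keyword screen over a system description. This is an illustrative sketch only — the rule IDs mirror Article 5 paragraph numbering, but the indicator lists, carve-out terms, and function names are our assumptions, not LitmusAI's actual ruleset:

```python
# Illustrative sketch of indicator-based Article 5 screening.
# Indicator keywords and carve-out terms are hypothetical examples.

RULES = {
    "5(1)(f)": {
        # All indicator groups must match: emotion inference AND a
        # workplace/education deployment context.
        "indicators": [{"emotion inference"}, {"workplace", "education"}],
        "carve_outs": ["medical", "safety"],  # Article 5(1)(f) exception
        "verdict": "RED",
    },
    "5(1)(g)": {
        # Untargeted scraping + facial recognition + database creation.
        "indicators": [{"web crawl", "scraping"}, {"facial recognition"}, {"database"}],
        "carve_outs": [],
        "verdict": "RED",
    },
}

def screen(description: str) -> dict:
    """Return a per-rule verdict: the rule's verdict if every indicator
    group matches and no carve-out term is present, otherwise CLEAR."""
    text = description.lower()
    verdicts = {}
    for rule_id, rule in RULES.items():
        hit = all(any(term in text for term in group) for group in rule["indicators"])
        excused = any(term in text for term in rule["carve_outs"])
        verdicts[rule_id] = rule["verdict"] if hit and not excused else "CLEAR"
    return verdicts
```

For example, a description like "emotion inference dashboard for workplace productivity monitoring" trips the hypothetical 5(1)(f) rule, while the same system described "for workplace safety alerts" hits the carve-out and screens Clear under this toy logic.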
Frequently asked questions
Direct answers to common questions about Article 5 and how LitmusAI addresses it. Regulatory citations reference EUR-Lex CELEX:32024R1689.
- What does EU AI Act Article 5 prohibit?
- Eight categories of AI practice are absolutely prohibited: subliminal techniques materially distorting behaviour (5(1)(a)), exploitation of vulnerabilities (5(1)(b)), social scoring of natural persons based on social behaviour or personal characteristics (5(1)(c) — note: the final regulation as adopted dropped the "by public authorities" limitation that appeared in the 2021 Commission proposal; the prohibition applies to any actor), real-time remote biometric identification in public spaces for law enforcement subject to narrow exceptions (5(1)(d)), biometric categorisation inferring sensitive characteristics (5(1)(e)), emotion inference in workplaces and education (5(1)(f)), untargeted scraping for facial recognition databases (5(1)(g)), and individual criminal-risk assessment based on profiling (5(1)(h)). Source: Regulation (EU) 2024/1689 Article 5.
- When did Article 5 become enforceable?
- Article 5 has been enforceable since 2 February 2025, per Article 113 of the EU AI Act. Source: Regulation (EU) 2024/1689 Article 113.
- Is LitmusAI a substitute for legal review?
- No. LitmusAI produces a screening verdict — Red, Amber, or Clear with confidence levels — not a legal opinion. Final determination of whether a system falls within an Article 5 prohibition requires qualified legal counsel.
- What is the UNREVIEWED reference ruleset disclaimer?
- The reference ruleset shipped with LitmusAI v1.0 was internally panel-authored and has not yet been reviewed by an external EU AI Act lawyer. This is surfaced verbatim in every report header and CLI output. Use the BYO-ruleset path if you need lawyer-signed output today; full external review is targeted for v1.1.
- Can I use a lawyer-signed Bring-Your-Own ruleset?
- Yes. `litmus use-ruleset your-firm-ruleset.json` switches the active ruleset; subsequent reports show "(SIGNED by: …)" in the header. Cryptographic signature verification of BYO rulesets is structural in v1.0 and lands fully in v1.1.
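LitmusAI's actual signature scheme is not documented here, so as an illustration of what full verification of a signed JSON ruleset could look like, here is a minimal HMAC-based sketch. The field names, canonicalisation choice, and key handling are all assumptions, not the v1.1 design:

```python
import hashlib
import hmac
import json

def sign_ruleset(ruleset: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 signature over the canonicalised rules list."""
    payload = json.dumps(ruleset["rules"], sort_keys=True).encode()
    ruleset["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return ruleset

def verify_ruleset(ruleset: dict, key: bytes) -> bool:
    """Recompute the HMAC over the rules and compare in constant time."""
    payload = json.dumps(ruleset["rules"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ruleset.get("signature", ""))
```

Any edit to the rules after signing makes verification fail, which is the property a "(SIGNED by: …)" report header needs to be worth trusting. A production scheme would more likely use asymmetric signatures so the verifier never holds the signing key.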
- Does LitmusAI cover all 8 Article 5 prohibitions?
- Yes. The 22-rule reference ruleset covers all eight sub-points (5(1)(a) through 5(1)(h)), with conservative-by-default verdict logic — preferring Amber over Clear on ambiguity. The trade-off is more false positives, never false negatives on Red.
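The conservative-by-default merge described in that answer can be sketched as a severity ordering in which ambiguity never resolves downward to Clear. Function and level names here are our assumptions, not LitmusAI's API:

```python
# Illustrative verdict merge: the overall verdict is the most severe
# per-rule verdict, and any ambiguity floors the result at AMBER.

SEVERITY = {"CLEAR": 0, "AMBER": 1, "RED": 2}

def overall_verdict(per_rule: list, ambiguous: bool = False) -> str:
    """Combine per-rule verdicts conservatively."""
    worst = max(per_rule, key=SEVERITY.__getitem__, default="CLEAR")
    if ambiguous and SEVERITY[worst] < SEVERITY["AMBER"]:
        return "AMBER"  # prefer a false positive over a missed flag
    return worst
```

Under this logic a single Red rule dominates any number of Clear ones, and a system that screens Clear but with low confidence still surfaces as Amber for human review.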
- What is the penalty for Article 5 violations?
- Up to €35M or 7% of global annual turnover, whichever is higher, under Article 99(3) — the highest tier of fines in the EU AI Act. The eight prohibitions are absolute: no consent, opt-out, or post-hoc mitigation rescues a prohibited practice once the system meets the prohibition criteria.
- Does LitmusAI make any network calls during screening?
- No. Default-mode screening is fully offline — outbound network calls are blocked at CI level via pytest-socket. An optional `--enhanced` mode uses an LLM judge for ambiguous cases (configurable, requires API key). The default behaviour ships zero-network for compliance teams that need it.
- Is LitmusAI free?
- Yes. Apache 2.0 licensed. No telemetry, no remote calls in default mode, no enterprise tier. The PyPI distribution is `litmus-screener`.
- Why is the PyPI package called litmus-screener instead of litmusai?
- The PyPI name `litmusai` was unavailable due to PyPI name-similarity rules — an unrelated package called `litmus-ai` already exists on PyPI. The brand is "LitmusAI"; the distribution name is `litmus-screener`. Both names resolve to this same tool through the schema.org `alternateName` declared on the docs page.
When the findings land on a governance desk
Tools surface problems. Programmes solve them.
When a screening verdict comes back Amber or Red, programme-level remediation — board narrative, regulator dialogue, design-of-record changes — is the real work. The tool surfaces the gap; AskAjay covers that remediation work.
Framework: Authenticity Gate — at AskAjay.ai, the advisory arm of AI Exponent LLC.
Explore the Authenticity Gate framework →