What the regulation requires
1. The following AI practices shall be prohibited:

(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;

(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

(g) the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
What you face if you don't comply
Article 5 has been enforceable since 2 February 2025 — placing or using a prohibited-practice AI system on the EU market today exposes the provider, importer, distributor, or deployer to the highest tier of fines in the regulation: up to €35M or 7% of total worldwide annual turnover, whichever is higher, under Article 99(3). The eight prohibitions are absolute — no consent, opt-out, or post-hoc mitigation rescues a system once it meets a prohibition's criteria. Screening systems before they ship is the only defensible posture.
How LitmusAI addresses this
- ¶ 5(1)(a): Detects subliminal-manipulation indicators in system descriptions and outputs; flags when a system materially distorts behaviour against the user's interest
- ¶ 5(1)(b): Pattern-matches vulnerable-population markers (minors, persons with disabilities, persons in vulnerable economic situations) and flags exploitation patterns
- ¶ 5(1)(f): Emits a Red verdict on any system combining emotion inference with a workplace or education deployment context (without the medical/safety carve-out)
- ¶ 5(1)(g): Detects untargeted-facial-image-scraping indicators (web crawl + facial recognition + database creation) — the exact pattern Article 5(1)(g) prohibits
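The Art. 5(1)(f) check above can be sketched as a simple rule: a Red verdict requires both the emotion-inference capability and a workplace/education context, and the medical/safety carve-out negates it. The sketch below is illustrative only — `SystemProfile`, the tag vocabulary, and the verdict strings are assumptions for this example, not LitmusAI's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    # Hypothetical tag sets describing a system under screening.
    capabilities: set = field(default_factory=set)          # e.g. {"emotion_inference"}
    deployment_contexts: set = field(default_factory=set)   # e.g. {"workplace"}
    stated_purposes: set = field(default_factory=set)       # e.g. {"medical"}

def screen_art5_1f(profile: SystemProfile) -> str:
    """Red when emotion inference meets a workplace/education context
    and no medical/safety carve-out applies; Green otherwise."""
    in_scope = "emotion_inference" in profile.capabilities and bool(
        profile.deployment_contexts & {"workplace", "education"}
    )
    carved_out = bool(profile.stated_purposes & {"medical", "safety"})
    return "Red" if in_scope and not carved_out else "Green"
```

Note that the carve-out is checked after scope: a medical-purpose tag never matters unless the prohibited combination is present in the first place, mirroring how the exception in Article 5(1)(f) only qualifies the prohibition.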
Source: eur-lex.europa.eu/…/CELEX:32024R1689 · Retrieved