EU AI Act Risk Classification: What You Need to Know
The EU AI Act introduces a tiered risk framework that will shape how AI systems are built, deployed, and audited across Europe. Here is what practitioners need to understand.

The Regulation Is Here
The EU AI Act entered into force on 1 August 2024. Its provisions phase in over several years: prohibitions on unacceptable-risk systems apply from February 2025, obligations for general-purpose AI models from August 2025, and most high-risk requirements from 2 August 2026, with requirements for high-risk AI embedded in regulated products extending to August 2027. If your organization develops, deploys, or uses AI within the EU, or provides AI systems to EU users, you need to understand its risk classification framework now.
This article focuses on the classification system itself, which is the foundation of everything else in the Act.
The Four Tiers
The Act organizes AI systems into four risk tiers. Each tier triggers different obligations.
Unacceptable Risk — Prohibited
These systems are banned outright. No exemption exists for commercial purposes.
Examples include:
- Social scoring systems, whether operated by public authorities or private actors
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
- AI that exploits psychological vulnerabilities to manipulate behavior
- AI that infers sensitive personal attributes (political views, ethnicity, sexual orientation) from biometric data
If your system falls in this category, stop development and seek legal counsel immediately.
High Risk — Regulated
This is the category that will affect most enterprise AI deployments. High-risk systems require mandatory conformity assessments, technical documentation, logging, human oversight mechanisms, and registration in the EU database.
High-risk AI covers two sub-categories:
Annex I — Systems in safety-critical products:
- Medical devices
- Machinery and industrial equipment
- Aviation and automotive systems
Annex III — Standalone high-risk systems:
- Biometric identification and categorization
- Critical infrastructure management (energy, water, transport)
- Educational and vocational assessment
- Employment and HR decisions (CV screening, performance monitoring)
- Access to essential private and public services (credit scoring, benefits assessment)
- Law enforcement (risk assessment, polygraph-like tools)
- Migration and border control
- Administration of justice
The employment and HR category is particularly broad. If you use AI to screen CVs, rank candidates, or monitor employee productivity, you are almost certainly operating a high-risk system.
Limited Risk — Transparency Obligations
These systems have lower obligations but must be transparent to users.
The primary requirement: users must be informed when they are interacting with an AI. This covers:
- Chatbots and conversational agents
- AI-generated content
- Deepfakes and synthetic media
The practical implication: "This response was generated by AI" disclosures are now legally required, not just best practice.
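As an illustration, a chatbot backend might attach the disclosure to every response it returns. This is a minimal sketch in Python; the schema and field names are our own, since the Act mandates the disclosure itself, not any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class ChatResponse:
    text: str
    ai_disclosure: str  # hypothetical field; the Act mandates disclosure, not a schema

def build_response(model_output: str) -> ChatResponse:
    # Transparency obligation: the user must be able to tell
    # they are interacting with an AI system.
    return ChatResponse(
        text=model_output,
        ai_disclosure="This response was generated by an AI system.",
    )
```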
Minimal Risk — No Mandatory Obligations
AI-powered spam filters, recommendation engines, and similar low-impact systems fall here. The Act encourages voluntary adherence to codes of conduct but does not mandate it.
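One way to operationalize the tiers is to encode them in an internal AI system inventory. The sketch below is illustrative only: the system names and tier assignments are hypothetical examples of the mappings discussed above, and actual classification decisions need legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory obligations"

# Hypothetical internal inventory: each system's intended purpose mapped
# to a provisional tier, pending legal review.
AI_INVENTORY = {
    "cv-screening-service": RiskTier.HIGH,         # Annex III: employment decisions
    "customer-support-chatbot": RiskTier.LIMITED,  # must disclose AI interaction
    "email-spam-filter": RiskTier.MINIMAL,
}

def obligations_gate(system: str) -> None:
    tier = AI_INVENTORY[system]
    if tier is RiskTier.UNACCEPTABLE:
        raise RuntimeError(f"{system}: prohibited practice; halt and seek counsel")
    print(f"{system}: provisionally {tier.value}")
```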
General-Purpose AI Models (GPAI)
The Act also addresses GPAI models — foundation models like GPT, Claude, and Llama. Providers must:
- Maintain technical documentation
- Put in place a policy to comply with EU copyright law, including identifying and respecting rights-holders' text-and-data-mining opt-outs
- Publish a sufficiently detailed summary of the content used for training
GPAI models designated as posing systemic risk (presumed where cumulative training compute exceeds 10^25 FLOPs) face additional requirements, including adversarial testing, serious-incident reporting to the European AI Office, and cybersecurity obligations.
This tier directly affects major LLM providers and is already prompting significant compliance activity.
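For orientation, teams often estimate cumulative training compute using the common rule of thumb of roughly 6 FLOPs per parameter per training token. That heuristic is an industry approximation, not language from the Act; only the 10^25 threshold comes from the regulation.

```python
def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    # Rule of thumb for dense transformers: ~6 FLOPs per parameter
    # per token across the forward and backward passes.
    return 6 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, per the Act

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs; above threshold: {flops > SYSTEMIC_RISK_THRESHOLD}")
```

In this example the estimate lands at about 6.3 x 10^24 FLOPs, just under the threshold, which illustrates how close today's large training runs sit to the line.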
Classification Is Not Always Obvious
Several factors complicate classification in practice.
Intended vs. actual use: The Act classifies systems based on their intended purpose. But if an operator deploys a general-purpose system in a high-risk context (using a generic LLM for medical triage, for example), the high-risk obligations apply.
Downstream modifications: Importers and distributors who make substantial modifications to an AI system become its provider for regulatory purposes, inheriting the full compliance burden.
Embedded components: A high-risk AI component embedded in a larger product can trigger high-risk classification for the entire product.
Evolving Annex III: The Commission can expand Annex III via delegated acts. Classification that is correct today may change.
What High-Risk Compliance Requires
For high-risk systems, organizations must implement:
- Risk management system: Continuous identification, analysis, and mitigation of risks throughout the AI lifecycle.
- Data governance: Documented processes for training data quality, bias assessment, and data management.
- Technical documentation: Detailed records of system design, capabilities, limitations, and testing.
- Logging: Automatic event logging with sufficient granularity to enable post-hoc auditability (a minimal sketch follows this list).
- Transparency: Information to deployers about system capabilities, limitations, and appropriate use.
- Human oversight: Mechanisms allowing humans to monitor, intervene, and override the system.
- Accuracy, robustness, and cybersecurity: Performance metrics and security controls appropriate to the risk level.
- Conformity assessment: Either self-assessment or third-party audit, depending on the category.
- EU registration: Entry in the public EU database for high-risk AI systems before placing the system on the market or putting it into service.
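To make the logging requirement concrete, here is a minimal sketch of an append-only audit trail for inference events. The event schema is our assumption about what "sufficient granularity" could look like in practice; the Act does not prescribe specific fields.

```python
import hashlib
import json
import time

def log_inference_event(log_path: str, model_version: str,
                        input_data: str, output_data: str,
                        human_override: bool = False) -> None:
    """Append one audit record per inference (hypothetical schema)."""
    event = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash payloads so the trail is auditable without retaining raw personal data.
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_data.encode()).hexdigest(),
        "human_override": human_override,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```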
Practical Steps for Security Teams
Security professionals have a specific role in AI Act compliance:
- Threat modeling: Identify attack vectors against the AI system itself (adversarial inputs, model extraction, data poisoning); a simple robustness-probe sketch follows this list.
- Penetration testing: Conduct structured security assessments under Article 9 (risk management) and Article 15 (accuracy, robustness, and cybersecurity).
- Incident response planning: The Act requires providers of high-risk systems to report serious incidents. You need a process.
- Supply chain assessment: If you use third-party AI components, assess their classification and compliance status.
- Audit readiness: Regulators and conformity assessment bodies will expect documented evidence. Build your evidence trail now.
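As one example of threat-model-driven testing, the sketch below probes a text classifier for label instability under small random input perturbations. The classifier here is a toy placeholder for your own model endpoint, and real adversarial testing would use far stronger attack strategies than random character noise.

```python
import random
from typing import Callable

def perturb(text: str, rate: float = 0.05) -> str:
    """Randomly corrupt characters; a crude stand-in for adversarial perturbation."""
    chars = list(text)
    for i in range(len(chars)):
        if random.random() < rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz ")
    return "".join(chars)

def robustness_probe(classify: Callable[[str], str], text: str, trials: int = 50) -> float:
    """Fraction of perturbed inputs whose predicted label flips from the clean baseline."""
    baseline = classify(text)
    flips = sum(classify(perturb(text)) != baseline for _ in range(trials))
    return flips / trials

if __name__ == "__main__":
    # Toy keyword classifier standing in for a real model endpoint.
    toy = lambda t: "spam" if "free" in t else "ham"
    print(f"flip rate: {robustness_probe(toy, 'free offer inside', trials=200):.2f}")
```

A high flip rate on benign perturbations is exactly the kind of finding that feeds the Article 15 robustness evidence trail.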
Penalties
Non-compliance carries significant financial penalties:
- Prohibited AI systems: up to €35 million or 7% of global annual turnover
- High-risk obligations violations: up to €15 million or 3% of turnover
- Providing incorrect information to authorities: up to €7.5 million or 1.5% of turnover
The higher of the two figures applies. For multinationals, these are material numbers.
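A quick calculation shows what "material" means in practice; the turnover figure below is hypothetical:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """The applicable cap is the higher of the fixed amount and the turnover percentage."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice tier for a hypothetical company with EUR 10 billion global turnover:
print(f"EUR {max_fine(35e6, 0.07, 10e9):,.0f}")  # EUR 700,000,000; the 7% cap dominates
```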
Conclusion
The EU AI Act's risk classification framework is the lens through which everything else in the regulation flows. Getting classification right — and documenting the reasoning — is the essential first step for any compliance program.
The timeline is tighter than most organizations realize. If you are operating or planning to deploy high-risk AI systems, the work should have started yesterday.
Black Unicorn Security offers EU AI Act compliance assessments, gap analyses, and technical implementation support. Our team combines cybersecurity expertise with deep knowledge of the regulatory framework.