Overview
The EU AI Act policy ensures AI interactions comply with the European Union's comprehensive AI regulation by identifying high-risk AI use cases and applying appropriate transparency and risk mitigation measures.
What It Detects
High-Risk AI Systems
AI systems used in critical infrastructure, education, employment, healthcare
Prohibited AI Practices
Subliminal techniques, social scoring, real-time biometric identification
Transparency Requirements
AI systems that interact with humans or generate content
Foundation Model Obligations
Large language models and other foundation AI systems
Risk Categories
Prohibited AI (Risk Level: Critical)
- Subliminal techniques that cause harm
- Social scoring systems
- Real-time remote biometric identification in public spaces
- AI systems that exploit vulnerabilities of specific groups
High-Risk AI (Risk Level: High)
- Biometric identification and categorization
- Critical infrastructure management
- Education and vocational training
- Employment and worker management
- Access to essential services
- Law enforcement applications
- Migration, asylum, and border control
- Administration of justice and democratic processes
Limited Risk AI (Risk Level: Medium)
- AI systems intended to interact with natural persons
- Emotion recognition systems
- Biometric categorization systems
- AI systems that generate or manipulate content
Minimal Risk AI (Risk Level: Low)
- AI-enabled video games
- Spam filters
- AI systems not covered by other categories
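The four-tier taxonomy above can be sketched as a simple lookup from use-case category to risk level. This is a minimal illustrative sketch, not the policy's actual implementation; the category keys and the `classify` helper are hypothetical names chosen for this example.

```python
from enum import Enum

class RiskLevel(Enum):
    CRITICAL = "prohibited"   # Prohibited AI practices
    HIGH = "high"             # High-risk AI systems
    MEDIUM = "limited"        # Limited-risk (transparency obligations)
    LOW = "minimal"           # Minimal-risk AI

# Hypothetical mapping mirroring the risk categories listed above.
USE_CASE_RISK = {
    "social_scoring": RiskLevel.CRITICAL,
    "subliminal_techniques": RiskLevel.CRITICAL,
    "realtime_biometric_id_public": RiskLevel.CRITICAL,
    "biometric_identification": RiskLevel.HIGH,
    "critical_infrastructure": RiskLevel.HIGH,
    "employment": RiskLevel.HIGH,
    "law_enforcement": RiskLevel.HIGH,
    "emotion_recognition": RiskLevel.MEDIUM,
    "content_generation": RiskLevel.MEDIUM,
    "chatbot_interaction": RiskLevel.MEDIUM,
    "spam_filter": RiskLevel.LOW,
    "video_game": RiskLevel.LOW,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk level for a use-case category.

    Unlisted categories default to minimal risk, matching the
    'AI systems not covered by other categories' bullet above.
    """
    return USE_CASE_RISK.get(use_case, RiskLevel.LOW)

print(classify("social_scoring").name)  # CRITICAL
print(classify("weather_forecast").name)  # LOW (default)
```

In practice, mapping a real interaction onto these categories requires contextual analysis rather than a fixed key lookup; the table only illustrates how the regulation's tiers relate to concrete use cases.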