---
name: eu-ai-act-compliance
description: Classify AI system risk under EU AI Act (Reg 2024/1689), generate Article 50 disclosures, and design conformity audit trails.
version: "1.0.0"
last-updated: "2026-04-17"
model_tested: "claude-sonnet-4-6"
category: compliance
platforms: [claude-code, codex, gemini-cli, cursor, copilot, windsurf, cline]
language: en
geo_relevance: [eu, fr]
priority: critical
dependencies:
  mcp: []
  skills: []
  apis: []
  data: [ai-act-risk-matrix.md]
update_sources:
  - url: "https://eur-lex.europa.eu/eli/reg/2024/1689/oj"
    check_frequency: "quarterly"
    last_checked: "2026-04-17"
  - url: "https://artificialintelligenceact.eu"
    check_frequency: "monthly"
    last_checked: "2026-04-17"
license: MIT
---
# EU AI Act Compliance

> **DISCLAIMER:** This skill provides guidance only. It does not constitute legal advice. Always verify with qualified legal professionals. The authors assume no liability for decisions made based on this skill's output.
## When to Use
Use this skill when:
- Building or deploying an AI system in the EU market
- Assessing whether your system falls under AI Act obligations
- Preparing disclosure notices for users interacting with AI
- Designing audit trail logging for AI systems
- Preparing documentation for EU AI Database registration
## Step 1: Risk Classification
Classify the AI system using the four-tier framework from Regulation (EU) 2024/1689:
### Prohibited (Article 5) — CANNOT be deployed
- Social scoring by public authorities
- Real-time remote biometric identification in public spaces (with exceptions)
- Emotion recognition in workplace/education
- Predictive policing based solely on profiling
- Untargeted facial image scraping
### High-Risk (Annex III) — Requires full conformity assessment
- Biometric identification and categorization
- Critical infrastructure management (water, gas, electricity, transport)
- Education and vocational training (admissions, assessment)
- Employment (recruitment, task allocation, termination)
- Essential services access (credit scoring, emergency services)
- Law enforcement (evidence evaluation, risk assessment)
- Migration and border control
- Justice and democratic processes
### Limited Risk (Article 50) — Transparency obligations
- Chatbots and conversational AI (must disclose AI nature)
- Emotion recognition systems (must inform subjects)
- Deep fakes and synthetic content (must label)
- AI-generated text published to inform the public on matters of public interest (must label)
### Minimal Risk — No specific obligations
- Spam filters, AI-enabled video games, inventory management
**Action:** Determine which tier applies. If High-Risk, proceed to Step 2. If Limited Risk, proceed to Step 3. If Prohibited, stop deployment.
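The tiering logic above can be sketched as a rule-based triage helper. The tag names below are hypothetical labels paraphrasing Article 5, Annex III, and Article 50 categories; they are not terms from the Regulation, and a real assessment must be done by a qualified professional.

```python
# Assumed use-case tags, loosely mapped from the lists above (NOT exhaustive).
PROHIBITED_TAGS = {"social_scoring", "realtime_public_biometric_id",
                   "workplace_emotion_recognition", "profiling_predictive_policing",
                   "untargeted_face_scraping"}
HIGH_RISK_TAGS = {"biometric_categorization", "critical_infrastructure",
                  "education_assessment", "recruitment", "credit_scoring",
                  "law_enforcement", "border_control", "justice"}
LIMITED_RISK_TAGS = {"chatbot", "emotion_recognition", "deepfake",
                     "public_interest_text"}

def classify(tags: set) -> str:
    """Return the strictest tier triggered by any of the use-case tags."""
    if tags & PROHIBITED_TAGS:
        return "prohibited"
    if tags & HIGH_RISK_TAGS:
        return "high"
    if tags & LIMITED_RISK_TAGS:
        return "limited"
    return "minimal"

# A recruitment chatbot is High-Risk, not merely Limited Risk:
print(classify({"chatbot", "recruitment"}))  # → high
```

Note that the strictest tier always wins: a system that is both a chatbot (Limited) and a recruitment tool (High-Risk) carries the full Annex III obligations.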
## Step 2: High-Risk Compliance (Annex III Systems)
Required documentation and measures:
- **Risk Management System (Article 9)**
  - Identify and analyze known and foreseeable risks
  - Estimate and evaluate risks from intended use and misuse
  - Adopt risk mitigation measures
  - Test to ensure residual risk is acceptable
- **Data Governance (Article 10)**
  - Training data must be relevant, representative, and, to the best extent possible, free of errors and complete
  - Document data collection, preparation, and labeling practices
  - Address potential biases
- **Technical Documentation (Article 11)**
  - System description, intended purpose, development process
  - Monitoring, functioning, and control measures
  - Detailed description of the AI model (architecture, training, evaluation)
- **Record-Keeping (Article 12)**
  - Automatic logging of events during system operation
  - Traceability of decisions
  - Minimum retention: duration appropriate to the intended purpose
- **Human Oversight (Article 14)**
  - Design for effective oversight by natural persons
  - Ability to override or reverse AI decisions
  - "Stop" button or equivalent procedure
- **Accuracy, Robustness, Cybersecurity (Article 15)**
  - Achieve appropriate levels of accuracy for the intended purpose
  - Resilience against errors, faults, and adversarial attacks
  - Cybersecurity measures proportionate to the risks
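A minimal sketch of tracking evidence against the six requirements above. The article keys and the evidence-file format are assumptions for illustration, not an official schema.

```python
# Annex III conformity requirements from the list above, keyed by article.
HIGH_RISK_REQUIREMENTS = {
    "art_9":  "Risk management system",
    "art_10": "Data governance",
    "art_11": "Technical documentation",
    "art_12": "Record-keeping",
    "art_14": "Human oversight",
    "art_15": "Accuracy, robustness, cybersecurity",
}

def missing_evidence(evidence: dict) -> list:
    """Return requirement names that still lack a documented evidence artifact."""
    return [name for art, name in HIGH_RISK_REQUIREMENTS.items()
            if not evidence.get(art)]

# Example: only risk-management and logging evidence exists so far.
gaps = missing_evidence({"art_9": "risk-register-v3.md",
                         "art_12": "log-policy.md"})
print(gaps)
```

Running such a gap check in CI keeps the conformity file inventory honest as the system evolves.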
## Step 3: Article 50 Transparency Obligations

AI systems that interact with natural persons must provide the following disclosures:
**Chatbot/Agent Disclosure**

> This service uses artificial intelligence. You are interacting with an AI assistant, not a human. Responses are generated automatically.

**Synthetic Content Marking**

> This content was created with the assistance of artificial intelligence.

**Voice Synthesis Disclosure**

> This voice is generated by artificial intelligence.

**Deep Fake / Synthetic Media**

> ⚠️ This audio/video content was generated by AI (EU AI Act Art. 50).
**Implementation:** Add the disclosure to:

- UI entry points (before first interaction)
- Generated content metadata (ID3 tags, EXIF, document properties)
- API responses (header or field)
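The API-response case can be sketched as a small wrapper. The header name `X-AI-Disclosure` and the `ai_generated` field are assumptions, not requirements of the Regulation; the Act mandates the disclosure, not a specific transport.

```python
# Chatbot disclosure text from the template above.
AI_DISCLOSURE = ("This service uses artificial intelligence. "
                 "You are interacting with an AI assistant, not a human.")

def with_disclosure(response: dict) -> dict:
    """Attach the Article 50 disclosure to an API response dict."""
    # Assumed header name; pick whatever your API contract specifies.
    response.setdefault("headers", {})["X-AI-Disclosure"] = AI_DISCLOSURE
    # Machine-readable flag so downstream clients can surface the notice.
    response["ai_generated"] = True
    return response

r = with_disclosure({"body": "Hello! How can I help?"})
print(r["headers"]["X-AI-Disclosure"])
```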
## Step 4: Audit Trail Design
Design logging to satisfy Article 12 requirements:
```json
{
  "timestamp": "ISO-8601",
  "session_id": "uuid",
  "action": "ai_inference|user_interaction|decision|override",
  "model_id": "model-name-version",
  "input_hash": "sha256-of-input",
  "output_summary": "brief-description",
  "risk_tier": "high|limited|minimal",
  "human_oversight": true,
  "user_consent_recorded": true,
  "data_categories": ["text", "personal_data", "biometric"],
  "retention_days": 180
}
```
**Retention:** Minimum 180 days (six months, per Article 19) for high-risk systems. Align with GDPR data minimization.
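A minimal sketch of emitting one record matching the schema above. The helper name and the hardcoded defaults are assumptions; a real implementation would take oversight, consent, and data-category values from the calling context.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def audit_entry(action: str, model_id: str, user_input: str,
                output_summary: str, risk_tier: str) -> dict:
    """Build one Article 12 audit record matching the schema above."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": str(uuid.uuid4()),
        "action": action,
        "model_id": model_id,
        # Hash rather than store raw input: supports GDPR data minimization
        # while preserving traceability of which input produced which output.
        "input_hash": hashlib.sha256(user_input.encode("utf-8")).hexdigest(),
        "output_summary": output_summary,
        "risk_tier": risk_tier,
        "human_oversight": True,          # assumed default for this sketch
        "retention_days": 180,            # six-month minimum for high-risk
    }

entry = audit_entry("ai_inference", "demo-model-1.0",
                    "loan application text", "eligibility score", "high")
print(json.dumps(entry, indent=2))
```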
## Step 5: EU AI Database Registration
High-risk AI systems must be registered in the EU database (Article 71) before being placed on the market. Prepare:
- Provider identification and contact details
- System description and intended purpose
- Conformity assessment status
- Member States where the system is available
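The registration data above can be collected in a structured record before submission. The field names here are illustrative assumptions, not the official database schema; check the actual submission form.

```python
# Illustrative registration record covering the four items listed above.
registration = {
    "provider": {
        "name": "Example SARL",                     # hypothetical provider
        "contact": "compliance@example.eu",
    },
    "system": {
        "name": "demo-screening-tool",              # hypothetical system
        "intended_purpose": "CV pre-screening for recruitment",
        "risk_tier": "high",
    },
    "conformity_assessment": {
        "status": "pending",
        "notified_body": None,   # filled in once assessment is under way
    },
    "member_states": ["FR", "DE"],
}

# Sanity check before submission: no empty required sections.
assert all(registration[k] for k in
           ("provider", "system", "conformity_assessment", "member_states"))
```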
## Key Deadlines
- 2 February 2025: Prohibited practices effective
- 2 August 2025: GPAI model obligations effective
- 2 August 2026: High-risk system rules (Annex III) effective
- 2 August 2027: Remaining provisions apply, including high-risk rules for AI embedded in Annex I regulated products
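The deadline list above can be turned into a simple in-force check (labels are shorthand for the entries above):

```python
from datetime import date

# Application dates from the list above, in chronological order.
DEADLINES = {
    date(2025, 2, 2): "Prohibited practices",
    date(2025, 8, 2): "GPAI model obligations",
    date(2026, 8, 2): "High-risk rules (Annex III)",
    date(2027, 8, 2): "Remaining provisions (incl. Annex I)",
}

def obligations_in_force(today: date) -> list:
    """Obligations whose application date has already passed."""
    return [name for d, name in sorted(DEADLINES.items()) if d <= today]

print(obligations_in_force(date(2026, 9, 1)))
# → ['Prohibited practices', 'GPAI model obligations', 'High-risk rules (Annex III)']
```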
## References
See `references/ai-act-risk-matrix.md` for the complete classification matrix.

- Official text: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
- Navigation tool: https://artificialintelligenceact.eu