The EU AI Act Readiness Checklist: 10 Steps to Compliance
- Mar 3

As the EU AI Act begins its staged rollout, a wait-and-see approach is no longer a viable business strategy. For UK firms selling into Europe or using AI internally, understanding how the Act applies extraterritorially is now a commercial necessity.
This checklist, informed by the ISO/IEC 42001 framework, provides a high-level roadmap for AI governance and enables organisations to reach an appropriate state of readiness ahead of the August 2026 deadline.
EU AI Act readiness means ensuring your AI systems are classified, governed, monitored, and documented in line with regulatory risk tiers. For most UK businesses, aligning with ISO 42001 as the global standard for AI risk management provides the fastest and most defensible route to compliance.
Here is the EU AI Act readiness checklist:
1. AI Inventory & Classification
Identify every AI system in your stack and categorise them according to the Act’s risk tiers: Unacceptable (Prohibited), High Risk (Regulated), Limited Risk (Transparency), or Minimal Risk.
Auditor’s Tip: Most enterprise AI used for HR, credit scoring, or infrastructure falls into the High Risk category.
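An inventory like this is easy to start as structured data rather than a spreadsheet. The sketch below is a minimal, hypothetical illustration (the system names and helper are invented for this example, not drawn from the Act) of recording each system against the four risk tiers and pulling out the high-risk entries that carry the heaviest obligations:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Unacceptable (Prohibited)"
    HIGH = "High Risk (Regulated)"
    LIMITED = "Limited Risk (Transparency)"
    MINIMAL = "Minimal Risk"

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

def high_risk_systems(inventory: list[AISystem]) -> list[AISystem]:
    """Return the systems that will face the heaviest compliance obligations."""
    return [s for s in inventory if s.tier is RiskTier.HIGH]

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystem("cv-screener", "Ranks job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "Answers customer queries", RiskTier.LIMITED),
    AISystem("spam-filter", "Filters inbound email", RiskTier.MINIMAL),
]

for system in high_risk_systems(inventory):
    print(f"{system.name}: {system.tier.value}")
```

Even a small structured register like this makes the later steps (documentation, registration, monitoring) far easier to evidence during an audit.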
2. Establish an AI Management System (AIMS)
Align with ISO/IEC 42001, the global standard for Artificial Intelligence Management Systems, to formally govern your AI lifecycle rather than relying on ad hoc spreadsheets or informal controls.
3. Data Governance & Bias Mitigation
Audit your training data to ensure datasets are relevant, representative, and free from material errors. Documented processes must exist to detect, measure, and correct structural bias as part of your AI governance strategy.
4. Technical Documentation
Create a technical file for every High Risk system, including architecture, design specifications, and justification for model choices. This is typically the first artefact regulators request during an EU AI Act assessment.
5. Automated Logging & Traceability
Enable event logging so High Risk AI systems automatically record their operation across the full lifecycle, supporting traceability, accountability, and detection of model drift.
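In practice, "automatic recording" usually means structured, timestamped event records that an auditor can replay later. A minimal sketch, assuming a JSON-over-standard-logging approach (the function name and fields here are illustrative, not a prescribed schema):

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("aims.audit")

def log_ai_event(system_id: str, event: str, detail: dict) -> dict:
    """Emit a structured, timestamped record supporting later traceability review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        "detail": detail,
    }
    # One JSON line per event keeps the log machine-readable for drift analysis.
    logger.info(json.dumps(record))
    return record

record = log_ai_event(
    "cv-screener",  # hypothetical system identifier
    "prediction",
    {"model_version": "2026.01", "confidence": 0.91},
)
```

The key design choice is that every record carries the system identifier and model version, so an output can always be traced back to the exact system state that produced it.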
6. The Human-in-the-Loop Protocol
Define clear human oversight mechanisms, including who has authority to pause, override, or shut down AI-driven actions in real time when necessary.
7. Transparency & User Information
Ensure individuals are informed when interacting with AI systems or AI-generated content. Transparency obligations under the Act apply to chatbots, automated decision tools, and synthetic media.
8. Accuracy, Robustness & Cybersecurity
High Risk systems must achieve appropriate levels of accuracy, robustness, and resilience throughout their lifecycle. Cybersecurity obligations under the EU AI Act increasingly overlap with NIS2 and ISO 27001 requirements, particularly for high-risk AI systems exposed to adversarial attacks such as data poisoning or prompt injection.
9. EU Database Registration
Prepare to register High Risk AI systems in the EU’s central database before market deployment. UK organisations without an EU presence must appoint an authorised EU representative.
10. Post-Market Monitoring (PMM)
Implement a documented feedback loop to monitor AI performance after deployment and report serious incidents to regulators as required.
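The feedback loop can start as something very simple: compare live performance against the documented baseline and escalate when the gap exceeds a defined tolerance. A hypothetical sketch (the threshold and metric are placeholders your PMM plan would set):

```python
def needs_pmm_review(baseline_accuracy: float,
                     live_accuracy: float,
                     tolerance: float = 0.05) -> bool:
    """Flag a performance drop that should trigger the documented PMM review."""
    return (baseline_accuracy - live_accuracy) > tolerance

# An 8-point accuracy drop against a 5-point tolerance would be escalated;
# a 2-point drop would not.
escalate = needs_pmm_review(0.92, 0.84)
within_tolerance = needs_pmm_review(0.92, 0.90)
```

Whether an escalation also amounts to a reportable "serious incident" is a judgement your PMM procedure should define in advance, not at the moment of failure.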
How an ISO 42001 Lead Auditor Can Help
Compliance with the EU AI Act can feel like a moving target, but working with an ISO 42001 Lead Auditor helps organisations translate regulatory requirements into a structured, auditable governance framework.
As a Lead Auditor, I help firms conduct gap analyses against the Act, build the AIMS required to automate compliance evidence, and perform independent audits that provide assurance to boards, regulators, and enterprise customers.
Don’t let the EU AI Act become a barrier to your innovation.
Sampson ISO Audit & Consult Ltd


