The EU Artificial Intelligence Act (AIA) mandates a robust Quality Management System (QMS) for high-risk AI systems under Article 17. For companies, especially in regulated sectors like healthcare, a key question is whether to build upon existing standards like ISO 13485 (for medical devices) or adopt the new ISO 42001 (for AI management systems).
The table below provides a high-level comparison of how these standards align with the core requirements of Article 17 AIA.
Article 17 AIA Requirement | ISO 42001 Coverage | ISO 13485 Coverage | Practical Implication |
---|---|---|---|
(a) Regulatory Compliance Strategy | Partial (Framework exists, but not product-safety prescriptive) | Partial (General regulatory compliance, lacks AI-specific focus) | Both require a strategy, but neither is fully aligned with the AIA's product-safety rigor. |
(b) Design & Design Control | Full (Annex A.6) | Full (Clause 7.3) | Both standards provide strong, established processes for design and development. |
(c) Development, QC & QA | Full (Annex A.6) | Full (Clause 7.3) | Both standards comprehensively cover development and quality assurance. |
(d) Test & Validation Procedures | Full (Annex A.6.2.4) | Full (Clause 7.3.6) | Both require rigorous verification and validation activities. |
(e) Technical Specifications | Partial (Guides selection, but not prescriptive) | Partial (Requires consideration, but not AI-specific) | The AIA demands more prescriptive, product-oriented standards than either provides. |
(f) Data Governance | Full (Annex A.7) | Gap | This is a key differentiator. ISO 42001 has detailed data governance controls; ISO 13485 does not. |
(g) AI Risk Management | Partial (Framework for organizational risk) | Partial (References ISO 14971, not AI-specific risks) | The AIA requires a continuous, product-focused risk process addressing fundamental rights, which is not fully covered by either. |
(h) Post-Market Monitoring | Full (Annex A.6.2.6) | Partial (Framework exists, lacks AI-specific elements) | ISO 42001 better addresses monitoring for performance drift and continuous learning. |
(i) Serious Incident Reporting | Full (Clause 10.2) | Full (Clause 8.2.3) | Both have robust incident reporting and corrective action processes. |
(j) Communication with Authorities | Full (Clause 7.4) | Full (Clause 7.2.3) | Both require clear communication protocols. |
(k) Record-Keeping | Full (Clause 7.5) | Full (Clause 4.2.5) | Both have comprehensive documentation and record-keeping requirements. |
(l) Resource Management | Full (Clause 7.1, Annex A.4) | Full (Clause 6) | Both cover personnel, infrastructure, and tooling. |
(m) Accountability Framework | Full (Clause 5.3, Annex A.3) | Partial (Roles defined, but not AI-specific) | ISO 42001 excels in defining AI-specific roles and responsibilities. |
## Understanding Article 17's QMS Requirements
Article 17 requires providers of high-risk AI systems to implement a documented QMS. This is a mandatory framework of written policies and procedures that ensures ongoing compliance. The system must cover everything from strategic compliance and technical development to data governance and post-market vigilance.
A key provision allows companies already under sectoral rules (like for medical devices) to integrate these AIA requirements into their existing QMS, preventing unnecessary duplication.
## The Core Challenge: A Standards Gap
As the table illustrates, neither ISO 42001 nor ISO 13485 alone guarantees full compliance with the AIA.
ISO 42001 provides an excellent AI management framework with crucial controls for data governance and AI roles. However, as noted by the European Commission, its focus is more on organisational guidance than the prescriptive, product-safety requirements of the AIA. It is flexible, allowing companies to balance risks with business opportunities, whereas the AIA demands a more rigorous, product-oriented approach.
ISO 13485, as the TÜV AI Lab whitepaper confirms, offers a structurally compatible foundation. It fully covers general quality processes like design control and validation. However, its "technology-agnostic" nature creates significant gaps in AI-specific areas, most notably in data governance (f) and AI risk management (g).
The European Commission has stated that future harmonised standards must be more prescriptive, closer in character to product-safety standards such as ISO 14971 for medical devices. This fundamental misalignment has contributed to delays in official guidance; see the joint FAQ from the Artificial Intelligence Board and the Medical Device Coordination Group.
## A Practical Blueprint: Learning from Medical Devices
While horizontal standards are delayed, sector-specific guidance shows the way forward. The recent joint guidance for Medical Device AI (MDAI) demonstrates how to merge the AIA's QMS with existing regulations.
It advises manufacturers to weave AI-specific requirements into their current quality systems. This means enhancing data governance with a focus on bias, documenting how AI outputs are made understandable to clinicians, and expanding risk management to cover fundamental rights.
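To make the documentation side of this concrete, the following is a minimal sketch of the kind of audit-trail record such guidance points toward. All field names here are illustrative assumptions, not terms defined by the AIA or the MDAI guidance:

```python
# Illustrative audit-trail record linking a training run to its data,
# bias checks, and risk-management file. Field names are assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class TrainingRunRecord:
    model_version: str
    dataset_id: str
    dataset_hash: str                 # ties the run to an exact data snapshot
    bias_checks: dict[str, float]     # e.g. subgroup performance deltas
    explainability_notes: str         # how outputs are made understandable
    risk_review_ref: str              # pointer into the risk-management file
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = TrainingRunRecord(
    model_version="2.3.1",
    dataset_id="ds-oncology-2025-q1",
    dataset_hash="sha256:9f2c",       # truncated placeholder value
    bias_checks={"sex_auc_delta": 0.012, "age_auc_delta": 0.028},
    explainability_notes="Saliency maps shown alongside each finding.",
    risk_review_ref="RMF-117",
)
print(json.dumps(asdict(record), indent=2))
```

Serialising records like this to an append-only store would give auditors a reproducible trail from each deployed model version back to its data and risk reviews.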
This approach shows that the AIA QMS is not a standalone system but an essential layer that adds rigour to data and model governance within a proven product development framework.
## The Way Forward: Actionable Steps for Now
With formal standards delayed, companies must be proactive. Waiting is not an option. Here is what to do:
- **Start with a Gap Analysis:** Use the direct text of Article 17 as your checklist. Compare your current practices against its 13 required aspects to identify your biggest gaps.
- **Build on What You Have:** If you have an existing QMS (e.g., based on ISO 13485), integrate the new AI requirements into it. Use ISO 42001 as a valuable resource to fill the critical gaps in data governance, AI risk management, and accountability.
- **Adopt a Product Safety Mindset:** Treat your AI system as a safety-critical product. Scrutinise your development lifecycle and risk management through the lens of product liability and user safety, going beyond the organisational focus of ISO 42001.
- **Document Everything:** Implement robust systems for tracking data lineage, model training, and design decisions. This creates a vital audit trail for future assessments.
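The gap-analysis step can be turned into something executable. Below is a minimal sketch: the aspect labels paraphrase Article 17(1)(a) through (m), and the example coverage statuses ("full", "partial", "gap") are illustrative placeholders, not an assessment of any real QMS:

```python
# Minimal gap-analysis sketch against the 13 aspects of Article 17(1) AIA.
# Labels paraphrase the Act; statuses below are illustrative only.
ARTICLE_17_ASPECTS = {
    "a": "Regulatory compliance strategy",
    "b": "Design and design control",
    "c": "Development, quality control and assurance",
    "d": "Test and validation procedures",
    "e": "Technical specifications",
    "f": "Data governance",
    "g": "AI risk management",
    "h": "Post-market monitoring",
    "i": "Serious incident reporting",
    "j": "Communication with authorities",
    "k": "Record-keeping",
    "l": "Resource management",
    "m": "Accountability framework",
}

def gap_report(coverage: dict[str, str]) -> list[str]:
    """List the aspects not yet fully covered, worst first."""
    order = {"gap": 0, "partial": 1}
    open_items = sorted(
        (order[coverage.get(k, "gap")], k, name)
        for k, name in ARTICLE_17_ASPECTS.items()
        if coverage.get(k, "gap") != "full"
    )
    return [f"({k}) {name}: {['gap', 'partial'][rank]}"
            for rank, k, name in open_items]

# Example: a hypothetical ISO 13485 shop before any AI-specific additions.
coverage = {k: "full" for k in "bcdijkl"} | {
    "a": "partial", "e": "partial", "g": "partial",
    "h": "partial", "m": "partial", "f": "gap",
}
for line in gap_report(coverage):
    print(line)
```

Re-running the report after each remediation cycle gives a simple, documented measure of progress toward full Article 17 coverage.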
## Conclusion
The uncertainty around the AI Act's QMS is a significant challenge. However, the path is clear: use ISO 13485 as your quality backbone and augment it with the AI-specific controls from ISO 42001. Companies that see Article 17 as a blueprint for building trustworthy AI will gain a competitive edge. By starting now and building a rigorous, transparent QMS, organisations can turn regulatory compliance into a mark of quality and reliability.