FAQ Series

EU AI Act FAQ - Part 3

What is required for the risk management system of high-risk AI systems?

A risk management system must be established, implemented, documented, and maintained. It is a continuous iterative process that runs throughout the entire lifecycle of the system and requires regular systematic updating.

What are the steps involved in the risk management process?

  • Identification and analysis of the known and reasonably foreseeable risks.
  • Estimation and evaluation of risks arising from use in accordance with the intended purpose and from reasonably foreseeable misuse.
  • Evaluation of other risks based on data gathered through post-market monitoring.
  • Adoption of appropriate and targeted risk management measures (see the sketch after this list).
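
To make the iterative, lifecycle-long nature of this process concrete, here is a minimal Python sketch that models it as a simple risk register. Everything in it is a hypothetical illustration: the Risk class, the 1-5 severity/likelihood scale, and the ACCEPTABLE_SCORE threshold are assumptions for demonstration, not values prescribed by the Act.

    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        description: str
        source: str                 # "intended use", "foreseeable misuse", "post-market data"
        severity: int               # 1 (negligible) .. 5 (critical) -- illustrative scale
        likelihood: int             # 1 (rare) .. 5 (frequent) -- illustrative scale
        mitigations: list[str] = field(default_factory=list)

        def score(self) -> int:
            return self.severity * self.likelihood

    ACCEPTABLE_SCORE = 6  # illustrative threshold, not from the Act

    def risk_management_iteration(register: list[Risk]) -> list[Risk]:
        """One pass of the continuous, iterative process: evaluate each
        identified risk and flag those still needing management measures."""
        return [r for r in register if r.score() > ACCEPTABLE_SCORE]

    # Steps 1-2: identify and estimate risks from intended use and misuse.
    register = [
        Risk("Misclassification of loan applicants", "intended use", 4, 3),
        Risk("Use on age groups outside the intended population", "foreseeable misuse", 3, 2),
    ]

    # Step 3: evaluation based on post-market monitoring data would append
    # newly observed risks to the register here.

    # Step 4: adopt measures for risks that exceed the acceptable level.
    for risk in risk_management_iteration(register):
        risk.mitigations.append("TODO: define targeted risk management measure")
        print(f"Needs mitigation (score {risk.score()}): {risk.description}")

In practice each monitoring cycle would re-run this evaluation on an updated register, which is what the "regular systematic updating" required by the Act amounts to operationally.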

What are the data governance requirements for high-risk AI?

High-risk AI systems must be developed on the basis of training, validation, and testing data sets that meet defined quality criteria: the data should be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose. Providers must also implement data governance practices covering, among other things, data collection, data preparation, and examination for possible biases.
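
The Act does not prescribe specific metrics or tooling, but the following Python sketch (using pandas) shows the kind of automated checks a provider might run for completeness and for examining possible biases. The data set, column names, reference distribution, and 10% tolerance are all hypothetical assumptions for illustration.

    import pandas as pd

    # Hypothetical training set; columns and values are illustrative only.
    df = pd.DataFrame({
        "age_group": ["18-25", "26-40", "26-40", "41-65", "41-65", "41-65"],
        "label":     [1, 0, 1, 0, 0, 1],
        "income":    [32_000, 54_000, None, 61_000, 47_000, 58_000],
    })

    # Completeness: the data should be, to the best extent possible,
    # free of errors and complete.
    missing = df.isna().sum()
    print("Missing values per column:\n", missing[missing > 0])

    # Representativeness / bias examination: compare subgroup shares
    # against an assumed reference distribution for the context of use.
    reference = {"18-25": 0.30, "26-40": 0.40, "41-65": 0.30}  # assumed
    observed = df["age_group"].value_counts(normalize=True)
    for group, expected in reference.items():
        gap = abs(observed.get(group, 0.0) - expected)
        if gap > 0.10:  # illustrative tolerance
            print(f"Possible under/over-representation of {group}: gap {gap:.0%}")

Checks like these would feed back into the data preparation practices the Act requires, rather than serving as a compliance test in themselves.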

What must be included in the technical documentation?

According to Annex IV, the technical documentation must include a general description of the system, a description of how it interacts with hardware or software, the data requirements, the human oversight measures, and a copy of the EU declaration of conformity.
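
A provider might track Annex IV coverage with something as simple as the following checklist sketch. The section names paraphrase a subset of Annex IV items rather than quoting the legal text, and the file paths and completeness check are hypothetical illustrations, not an official conformity tool.

    # Hypothetical Annex IV coverage checklist; names paraphrase a subset
    # of Annex IV items and are not the official legal text.
    technical_documentation = {
        "general_description_of_system": "docs/general.md",
        "hardware_software_interaction": "docs/interfaces.md",
        "data_requirements": "docs/data.md",
        "human_oversight_measures": None,  # not yet written
        "eu_declaration_of_conformity_copy": "docs/declaration_of_conformity.pdf",
    }

    missing_sections = [name for name, path in technical_documentation.items()
                        if path is None]
    if missing_sections:
        print("Technical documentation incomplete, missing:",
              ", ".join(missing_sections))
    else:
        print("All tracked Annex IV sections have an associated document.")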

What about children's safety?

When implementing the risk management system, specific consideration must be given to whether the high-risk AI system is likely to be accessed by, or have an impact on, children.