EU AI Act FAQ - Part 3

Here are links to Part 1 and Part 2 for reference.


What is required in relation to the risk management system for high-risk AI systems under the EU AI Act?

A risk management system must be established, implemented, documented, and maintained for high-risk AI systems. It is a continuous iterative process that runs throughout the entire lifecycle of the high-risk AI system and requires regular systematic updating.

What are the steps involved in the risk management system for high-risk AI systems?

The risk management system includes the following steps:

  • Identification and analysis of known and foreseeable risks associated with each high-risk AI system.

  • Estimation and evaluation of risks that may arise when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.

  • Evaluation of other potentially arising risks based on the analysis of data gathered from the post-market monitoring system.

  • Adoption of suitable risk management measures.
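To make these steps concrete, here is a minimal, illustrative Python sketch of a risk register built around them. Everything in it (the RiskRegister and RiskEntry names, the 1-to-5 severity scale, the 90-day review window) is our own assumption for illustration, not terminology or a threshold taken from the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskSource(Enum):
    # The three analysis steps named in the risk management process.
    KNOWN_OR_FORESEEABLE = "known and foreseeable risks"
    USE_OR_FORESEEABLE_MISUSE = "intended use / foreseeable misuse"
    POST_MARKET_MONITORING = "post-market monitoring data"


@dataclass
class RiskEntry:
    description: str
    source: RiskSource
    severity: int            # 1 (low) to 5 (high); the scale is an assumption
    mitigation: str = ""     # adopted risk management measure, if any
    last_reviewed: date = field(default_factory=date.today)


@dataclass
class RiskRegister:
    # Supports the continuous, iterative process by flagging stale entries.
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def due_for_review(self, today: date, max_age_days: int = 90) -> list[RiskEntry]:
        # Entries not updated within the window need systematic re-evaluation.
        return [e for e in self.entries
                if (today - e.last_reviewed).days > max_age_days]
```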

The IEEE CertifAIEd certification can help customers comply with legislation such as the EU AI Act; read more about the program here.

What should be considered when implementing risk management measures for high-risk AI systems?

The risk management measures should take into account the effects and possible interactions resulting from the combined application of the requirements for high-risk AI systems set out in the Act. They should also consider the generally acknowledged state of the art, including relevant harmonised standards or common specifications.

Is there a specific consideration for high-risk AI systems accessed by or impacting children?

Yes, specific consideration should be given to whether the high-risk AI system is likely to be accessed by or have an impact on children when implementing the risk management system.

What are the requirements for data and data governance in high-risk AI systems under the EU AI Act?

The following requirements apply to data and data governance in high-risk AI systems:

  • High-risk AI systems that make use of techniques involving the training of models with data should be developed on the basis of training, validation, and testing data sets that meet quality criteria.

  • Data governance and management practices should be implemented for training, validation, and testing data sets. These practices should cover design choices, data collection, data preparation operations (such as annotation, cleaning, and enrichment), formulation of assumptions about the data, assessment of data availability and suitability, examination of possible biases, and addressing data gaps or shortcomings.

  • Training, validation, and testing data sets should be relevant, representative, free of errors, and complete, with statistical properties appropriate to the persons or groups of persons on whom the high-risk AI system is intended to be used. To the extent required by the intended purpose, they should also take into account the characteristics particular to the specific geographical, behavioural, or functional setting in which the system will be used.

  • Providers of high-risk AI systems may process special categories of personal data (as defined in relevant regulations) for bias monitoring, detection, and correction, subject to appropriate safeguards for the rights and freedoms of individuals, such as technical limitations, security measures, pseudonymization, or encryption where anonymization may significantly impact the intended purpose.

  • Appropriate data governance and management practices should also apply to the development of high-risk AI systems that do not use such training techniques, to ensure compliance with these requirements. A rough sketch of two of the practices above follows this list.
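As a rough illustration, the Python sketch below examines group representation in a data set and pseudonymises identifiers with a salted hash. The function names and the hashing scheme are our own assumptions; real safeguards would also involve encryption, access controls, and a documented legal basis for processing.

```python
import hashlib
from collections import Counter


def pseudonymise(value: str, salt: str) -> str:
    # One-way salted hash as a simple pseudonymisation safeguard. A real
    # deployment would add key management, access controls, and encryption.
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]


def group_representation(records: list[dict], attribute: str) -> dict:
    # Examines possible bias by measuring how each group is represented
    # in a training, validation, or testing data set.
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


# A special-category attribute is kept solely for bias monitoring,
# detection, and correction, while direct identifiers are pseudonymised.
records = [
    {"id": pseudonymise("user-1", "s3cret"), "ethnicity": "group_a"},
    {"id": pseudonymise("user-2", "s3cret"), "ethnicity": "group_b"},
    {"id": pseudonymise("user-3", "s3cret"), "ethnicity": "group_a"},
]
print(group_representation(records, "ethnicity"))
# e.g. {'group_a': 0.667, 'group_b': 0.333}
```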

What must be included in the technical documentation for a high-risk AI system?

The technical documentation should contain, at a minimum, the elements specified in Annex IV of the Act. A summary of the requirements is as follows:

1. General description of the AI system, including its intended purpose, development details, interaction with external hardware or software, software/firmware versions, forms in which the system is placed on the market, hardware requirements, and instructions for use/installation.

2. Detailed description of the AI system elements and development process, including methods and steps taken, design specifications, system architecture, computational resources used, data requirements (including data sources and labeling procedures), human oversight measures, description of pre-determined changes, and validation/testing procedures.

3. Information about the monitoring, functioning, and control of the AI system, including performance capabilities and limitations (accuracy levels), risks to health, safety, fundamental rights, and potential discrimination, human oversight measures, and specifications on input data.

4. Description of the risk management system according to Article 9 of the Act.

5. Documentation of any changes made to the system throughout its lifecycle.

6. List of harmonised standards applied to the system, with references published in the Official Journal of the European Union.

7. Copy of the EU declaration of conformity.

8. Detailed description of the post-market performance evaluation system, including the post-market monitoring plan as required by Article 61 of the Act.
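Some teams keep this documentation in a machine-readable form alongside the system itself. Below is a minimal, illustrative Python skeleton mirroring the eight points above; the field names are our own shorthand, not official Annex IV headings.

```python
from dataclasses import dataclass, field


@dataclass
class TechnicalDocumentation:
    # Field names are our own shorthand, not official Annex IV headings.
    general_description: str                 # 1. purpose, versions, hardware, instructions
    development_process: str                 # 2. methods, architecture, data, oversight
    monitoring_and_control: str              # 3. capabilities, limitations, risks, input data
    risk_management_system: str              # 4. per Article 9
    lifecycle_changes: list = field(default_factory=list)     # 5. changes over the lifecycle
    harmonised_standards: list = field(default_factory=list)  # 6. standards applied
    eu_declaration_of_conformity: str = ""   # 7. copy of the declaration
    post_market_plan: str = ""               # 8. monitoring plan per Article 61
```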

Search for "Annex IV" on this page to learn more about the requirements.

What should be done if a high-risk AI system is related to a product covered by other legal acts?

If a high-risk AI system is related to a product covered by the legal acts listed in Annex II, Section A of the Act, a single set of technical documentation should be prepared. This documentation must include all the information outlined in Annex IV of the AI Act as well as the information required under those legal acts.

Those are all the questions I have gathered so far; however, I will be adding more FAQs to this page.

Meanwhile, if you have questions or would like assistance implementing ethics in AI development, reach out to us via our contact form.