EU AI Act FAQ - Part 2

Here is a link to part 1 for reference.

How does the EU AI Act complement the General Data Protection Regulation (GDPR) and the Law Enforcement Directive?

The EU AI Act complements the GDPR and the Law Enforcement Directive by introducing a set of harmonized rules applicable to the design, development, and use of certain high-risk AI systems, together with restrictions on certain uses of remote biometric identification systems. The proposal aims to minimize the risk of algorithmic discrimination and to ensure compliance with data protection and privacy requirements throughout the lifecycle of AI systems.

How does the EU AI Act integrate with existing safety legislation for high-risk AI systems related to products?

The EU AI Act integrates with existing sectoral safety legislation to ensure consistency, avoid duplications, and minimize additional burdens. For AI systems related to products covered by the New Legislative Framework (NLF) legislation (such as machinery, medical devices, and toys), the requirements set out in the EU AI Act will be checked as part of the existing conformity assessment procedures. The proposal acknowledges that NLF legislation aims to ensure the overall safety of the final product and may contain specific requirements for the safe integration of AI systems into those products.

How does the proposal fit in with other initiatives of the Commission?

The proposal ensures consistency and complementarity with other ongoing or planned initiatives, such as the revision of sectoral product legislation and initiatives addressing liability issues related to new technologies like AI systems.

The proposal aligns with the Commission's overall digital strategy, promoting technology that works for people and ensuring AI is developed in ways that respect people's rights and earn their trust.

The promotion of AI-driven innovation is closely linked to initiatives like the Data Governance Act, the Open Data Directive, and the EU strategy for data. These initiatives establish trusted mechanisms and services for the sharing, re-use, and pooling of data essential for developing high-quality AI models.

Impact of the EU AI Act on Businesses

Now let's look at how the EU AI Act will impact your business.

What are the requirements for high-risk AI systems under the preferred option?

The requirements for high-risk AI systems include data, documentation, and traceability, provision of information and transparency, human oversight, and robustness and accuracy. These requirements would be mandatory.

Read more about what qualifies as a high-risk AI system on our blog here.

What are the costs associated with compliance for high-risk AI systems in percentage terms?

The estimated costs of complying with the specific requirements and obligations for high-risk AI systems are approximately EUR 6,000 to EUR 7,000 per system. Against the total cost of an average high-risk AI system of around EUR 170,000 by 2025, this represents roughly 4% of the system's value. In addition, there may be annual costs for ensuring human oversight of approximately EUR 5,000 to EUR 8,000 per year, or roughly 2.9% to 4.7% of the system's value. Verification costs for suppliers of high-risk AI could amount to EUR 3,000 to EUR 7,500, a further 1.8% to 4.4% of the system's value.
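The percentage figures above follow directly from the quoted euro amounts. As a quick sanity check, the small sketch below recomputes each range against the EUR 170,000 average system value (the `cost_share` helper and its name are illustrative, not anything defined by the Act):

```python
# Estimated average value of a high-risk AI system by 2025 (from the figures above).
AVG_SYSTEM_VALUE_EUR = 170_000


def cost_share(cost_eur: float, system_value_eur: float = AVG_SYSTEM_VALUE_EUR) -> float:
    """Express a cost as a percentage of the system's value, rounded to one decimal."""
    return round(cost_eur / system_value_eur * 100, 1)


# One-off compliance costs (EUR 6,000-7,000)
print(cost_share(6_000), cost_share(7_000))   # 3.5 4.1 -> roughly 4%
# Annual human-oversight costs (EUR 5,000-8,000)
print(cost_share(5_000), cost_share(8_000))   # 2.9 4.7
# Supplier verification costs (EUR 3,000-7,500)
print(cost_share(3_000), cost_share(7_500))   # 1.8 4.4
```

These are estimates from the impact assessment, not fixed fees; actual costs will vary with the system and the existing compliance processes of the provider.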

To learn more about how to carry out assessments of high-risk AI systems, read our blog post on IEEE CertifAIEd.

Under what conditions would the Regulation apply to high-risk AI systems that were placed on the market or put into service before the EU AI Act enters into force?

The Regulation would apply to high-risk AI systems that were placed on the market or put into service before the specified date only if significant changes are made to their design or intended purpose from that date onward. Otherwise, they would be exempted from the Regulation's requirements.

Read more about the impact of the EU AI Act on various industries here.

What are the maximum administrative fines for non-compliance under the EU AI Act?

Non-compliance with the prohibition of certain artificial intelligence practices under Article 5, or non-compliance of an AI system with the requirements laid down in Article 10, is subject to administrative fines of up to EUR 500,000. Non-compliance with any other requirement or obligation under the Regulation, excluding Articles 5 and 10, can result in administrative fines of up to EUR 250,000.

To read more about Articles 5 and 10, click here.

Impact of the EU AI Act on Member States

What will Member States need to do in relation to the implementation of the legislative requirements?

Member States will need to designate supervisory authorities responsible for implementing the legislative requirements. These authorities will require sufficient technological expertise, human resources, and financial resources to carry out their supervisory function effectively.

Can Member States use existing arrangements for the supervisory function?

Yes, Member States can build on existing arrangements, such as those related to conformity assessment bodies or market surveillance, for the supervisory function. However, they will need to ensure that these arrangements have the necessary technological expertise and resources to fulfill the new requirements.

What is the estimated staffing requirement for the supervisory function in each Member State?

Depending on the pre-existing structure in each Member State, the staffing requirement for the supervisory function could range from 1 to 25 Full-Time Equivalents (FTEs) per Member State. This would depend on factors like the complexity of AI systems and the existing infrastructure for oversight and regulation.

Read part-3 of the FAQ here.

Meanwhile, if you have questions or need assistance implementing ethics in AI development, reach out to us via our contact form.

For a free AI governance maturity assessment, click the link below.