Outcomes of the final negotiation round on EU AI Act

December 10, 2023

The final negotiation round of the EU AI Act, also known as the final trilogue in Brussels jargon, started on Wednesday, 6 December 2023 and wrapped up on Friday, 8 December 2023. It included at least one 22-hour marathon session, with a lot of back and forth between the member states before an agreement was reached. That said, the final text of the negotiations is expected to be compiled and made available only in January 2024, and there is still a lot of technical work needed to iron out the details in the coming weeks.

In this blog post, I have tried to compile and summarize information made available by reporters present during the negotiations.

Prohibited systems in the upcoming EU AI Act

Based on this interview with Luca Bertuzzi, technology editor at EURACTIV, the member states agreed to prohibit the following in the latest round of negotiations of the EU AI Act:

1) Social scoring systems, as well as systems that exploit vulnerabilities (such as those related to disability) or use manipulative techniques.

2) Systems like Clearview AI that scrape facial images from the internet to create databases.

3) Emotion recognition systems in the workplace and education (with some exceptions like driver drowsiness prevention).

4) Predictive policing systems that make individual risk assessments to infer if someone will commit a crime.

5) Biometric categorization systems that categorize people based on sensitive traits like race, political opinions, or religious beliefs.

The member states pushed back on some prohibitions proposed by the European Parliament, especially related to law enforcement uses, but compromises were reached to allow narrow exemptions for things like preventing terrorist attacks while aiming to avoid mass surveillance. The details of some of these exemptions still need to be worked out.

Repercussions of violating EU AI Act rules

Based on the article, organizations that violate the EU AI Act's rules can face severe financial penalties:

1) For using a banned AI application, fines can reach 35 million euros or 6.5% of the company's global turnover, whichever is higher.

2) For violations of obligations around high-risk AI systems, fines can reach 15 million euros or 3% of global turnover.

3) For lesser violations, fines start at half a million euros and 1.5% of turnover.

In addition to fines based on turnover, the most serious violations could result in the European Commission taking emergency action to ban the AI system from the EU market entirely.

So in short, the punishment for organizations launching prohibited systems can include substantial fines, running into the millions or even billions of euros depending on company size, as well as potential bans blocking the system from operational use in the EU. These strict penalties aim to prevent problematic AI applications from spreading in Europe.
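To make the tiering concrete, here is a minimal Python sketch of the "fixed amount or share of global turnover, whichever is higher" logic, assuming the figures reported above; the example company and its turnover are hypothetical:

```python
def eu_ai_act_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier.

    Fines are reported as the higher of a fixed amount or a
    percentage of global annual turnover. Figures follow the
    trilogue reporting cited above; the final text may differ.
    """
    tiers = {
        "prohibited_system": (35_000_000, 0.065),    # banned AI applications
        "high_risk_obligation": (15_000_000, 0.03),  # high-risk system violations
        "lesser_violation": (500_000, 0.015),        # lesser violations
    }
    fixed_amount, turnover_pct = tiers[violation]
    return max(fixed_amount, turnover_pct * global_turnover_eur)

# Hypothetical company with EUR 50 billion global turnover:
# 6.5% of turnover (EUR 3.25 billion) exceeds the EUR 35 million floor.
print(eu_ai_act_fine("prohibited_system", 50e9))  # 3250000000.0
```

This is how a fine can run into the billions for a large company while the fixed amount still bites for smaller ones.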

Impact on open source models

Mistral, the French startup famous for its 7B series of models, released its latest model via a torrent link, without any accompanying information.

While the unceremonious release was met with a lot of interest on the internet, policy advisers such as Kris Shrishak speculated that such releases will not be exempted simply because the model is open source. Is that true, though?

Here is my take, based on the amendments seen in the draft proposal:

The EU AI Act treats free and open-source AI software favourably:

1. Free and open-source AI components are exempt from most regulations, except when they are part of high-risk AI systems.

2. Collaborative development and hosting in open repositories are not considered placement on the market.

3. Commercial activity around open-source AI may include charging for the software or technical support, or using personal data for purposes beyond security and compatibility.

4. Developers of open-source AI components are not compelled to comply with AI value chain requirements, but are encouraged to adopt documentation practices such as model and data cards for transparency.

5. The Act recognizes the economic and innovation value of open-source AI, estimating contributions of EUR 65 billion to EUR 95 billion to the EU's GDP, and aims to foster trust and collaboration in the AI ecosystem.
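Since the Act encourages model and data cards for open-source components, here is a minimal, purely illustrative sketch of what such a card might capture, serialized as JSON. The field names and values are my own assumptions, not a schema mandated by the Act:

```python
import json

# A minimal, illustrative model card for an open-source release.
# Field names are assumptions, not a format mandated by the EU AI Act.
model_card = {
    "model_name": "example-7b",  # hypothetical model
    "license": "Apache-2.0",
    "intended_use": "General-purpose text generation",
    "out_of_scope_uses": ["Biometric categorization", "Social scoring"],
    "training_data": {
        "sources": ["Publicly available web text"],
        "known_biases": "Not yet audited; see data card",
    },
    "evaluation": {"benchmark": "hypothetical-eval-v1", "score": None},
    "energy": {"training_gpu_hours": None},  # transparency on efficiency
}

with open("MODEL_CARD.json", "w") as f:
    json.dump(model_card, f, indent=2)
```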

Provisions to regulate foundation models

While the last trilogue ended in a stalemate, primarily because of disagreements over the regulation of foundation models, this one seems to have ended with several agreements. Regulators who wanted stricter controls have been upset at the seeming loosening of the rules aimed at this category of artificial intelligence.

Foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. A foundation model can be unimodal or multimodal, trained through various methods such as supervised learning or reinforcement learning. AI systems with a specific intended purpose or general-purpose AI systems can be implementations of a foundation model, which means that each foundation model can be reused in countless downstream AI or general-purpose AI systems. These models hold growing importance for many downstream applications and systems.

The EU AI Act has significant implications for foundation models in AI systems:

1. Foundation models face increased regulatory scrutiny due to their complexity and potential impact.

2. Providers must assess and mitigate foreseeable risks to health, safety, rights, environment, and democracy.

3. Strict data governance measures, including bias assessment, are required for dataset incorporation.

4. Models must maintain performance, predictability, interpretability, and cybersecurity.

5. Emphasis on energy efficiency aligns with environmental goals.

6. Thorough technical documentation and user instructions are essential for compliance.

7. Quality management systems ensure adherence to regulations.

8. Registration in a public EU database is mandatory.

9. Providers must retain technical documentation for ten years.

10. Generative AI models must meet additional transparency and content legality requirements.
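Purely as an illustration of how a provider's team might track these obligations, the sketch below encodes them as a simple Python checklist. The field names are my own shorthand, not terms from the Act:

```python
from dataclasses import dataclass, fields

@dataclass
class FoundationModelObligations:
    """Illustrative checklist mirroring the obligations listed above."""
    risks_assessed_and_mitigated: bool = False    # health, safety, rights, etc.
    data_governance_and_bias_checked: bool = False
    performance_and_security_validated: bool = False
    energy_efficiency_documented: bool = False
    technical_docs_and_instructions_ready: bool = False
    quality_management_system_in_place: bool = False
    registered_in_eu_database: bool = False
    docs_retention_10_years_planned: bool = False
    generative_transparency_met: bool = False     # if a generative model

def outstanding(obligations: FoundationModelObligations) -> list[str]:
    """Return the names of obligations not yet satisfied."""
    return [f.name for f in fields(obligations) if not getattr(obligations, f.name)]

checklist = FoundationModelObligations(registered_in_eu_database=True)
print(outstanding(checklist))  # everything except registration
```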

EURACTIV has also reported that only models trained with more than 10^25 FLOPs of compute will be subject to the stricter obligations for models posing systemic risk. This leaves most current models outside the ambit of these rules; however, this has not been confirmed.
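For intuition on what such a compute threshold means in practice, the sketch below uses the common approximation of training FLOPs ≈ 6 × parameters × training tokens from the scaling-law literature; the threshold constant and all model figures are assumptions or hypothetical round numbers:

```python
# Rough training-compute estimate using the common heuristic
# FLOPs ~= 6 * parameters * training tokens (scaling-law literature).
# All model figures below are hypothetical round numbers.

THRESHOLD_FLOPS = 1e25  # reported threshold, not yet confirmed

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

models = {
    "7B model, 2T tokens": training_flops(7e9, 2e12),        # ~8.4e22
    "70B model, 2T tokens": training_flops(70e9, 2e12),       # ~8.4e23
    "hypothetical frontier model": training_flops(1.8e12, 13e12),  # ~1.4e26
}

for name, flops in models.items():
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} threshold)")
```

Under this heuristic, today's widely used open models fall one to two orders of magnitude below the reported threshold, which is why most models would stay out of scope.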

The final text of the act is expected to be ready by the end of January 2024, and the next several weeks will be spent by the technical teams ironing out the details.

The final text will then be presented to both the European Parliament and the Council for debate and modifications, and only then will it be ratified into law. And even once the law is passed, organisations and member states will have until 2025 to comply, as most of them are not at all prepared for its implementation.