The much-anticipated EU AI Act will not be the last piece of AI legislation that the EU Parliament hopes to ratify. The Artificial Intelligence Liability Directive (AILD) aims to adapt non-contractual civil liability rules to artificial intelligence. The new rules are intended to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU.
Let's look at some pertinent questions related to this directive.
What is the goal of the Artificial Intelligence Liability Directive (AILD)?
The goal of the AI liability directive is to establish uniform requirements for non-contractual civil liability for damage caused by AI systems. The aim is to improve the functioning of the internal market and to ensure that victims of AI-related damage receive protection equivalent to that available to victims of damage caused by products in general.
A separate liability directive for AI is necessary because enforcing liability rules has proven particularly complex in the context of emerging digital technologies such as AI, the internet of things, and robotics. The rules that determine how damage caused by human activities or goods can be compensated are especially difficult to apply to AI. This complexity has eroded trust in AI technologies among EU citizens and businesses: while AI applications are generally seen as potentially useful, they are also perceived as risky, which lowers adoption. A recent EU survey found that liability for potential damages is one of the major external obstacles to AI adoption in the EU, with 33% of enterprises expressing concerns about it. The European Commission has therefore proposed the AI liability directive to address these challenges, encourage trust in AI technologies, and create the investment stability needed for the successful uptake of AI products and services in the Union.
Why are existing liability rules not sufficient to address liability from AI systems?
Existing liability rules are insufficient to address liability arising from AI systems for several reasons. First, there is uncertainty over whether intangible elements such as digital content, software, and data qualify as products under the Product Liability Directive (PLD). This creates legal ambiguity about compensation for damage caused by software, including software updates, and raises questions about who is liable for such damage.
Second, new technologies, including AI, introduce new risks such as cybersecurity vulnerabilities and safety concerns related to data inputs. The PLD, however, currently provides compensation only for physical or material damage, leaving a gap in addressing these emerging risks.
Third, AI systems possess characteristics such as opacity, lack of transparency, autonomous behavior, continuous adaptation, and limited predictability. These characteristics make it hard to meet the burden of proof required for successful claims under current liability rules. Victims typically need to prove the existence of damage, the fault of the liable party, and the causal link between the fault and the damage. With AI systems, it can be excessively difficult or even impossible for victims to identify and prove the fault or defect and to establish the causal link between that fault or defect and the damage suffered.
The lack of clarity and adequacy in existing liability rules for AI systems can lead to divergent approaches by national courts, fragmenting liability rules across the EU. This fragmentation creates legal uncertainty for businesses operating in multiple Member States and hinders victims' ability to obtain compensation for harm caused by AI products. The resulting compensation gap undermines citizens' trust in AI and challenges the fairness and effectiveness of the legal and judicial system in handling claims involving AI systems.
What is the objective of the AI liability directive?
The objective of the AI liability directive is to promote the rollout of trustworthy AI and maximize its benefits for the internal market. It also aims to reduce legal uncertainty for businesses involved in AI development or use and prevent the emergence of fragmented AI-specific adaptations of national civil liability rules.
What is the scope of the proposed AI liability directive?
The proposed AI liability directive seeks to harmonize non-contractual civil liability rules for damage caused by AI systems. It applies to any type of victim, whether individuals or businesses, who suffer harm due to the fault or omission of AI providers, developers, or users, resulting in damage covered by national law.
Does the AI liability directive cover high-risk AI systems only?
No, the AI liability directive applies to damage caused by AI systems, irrespective of whether they are classified as high-risk under the EU AI Act.
Learn more about how the EU AI Act classifies AI systems in this article.
Does the AI liability directive affect existing rules in other EU legislation?
The AI liability directive does not affect existing rules laid down in other EU legislation, such as those regulating liability conditions in the field of transport, the proposed revision of the Product Liability Directive, or the Digital Services Act.
Does the AI liability directive address criminal liability?
The AI liability directive does not apply to criminal liability. However, it may be applicable to state liability, as state authorities are subject to the obligations outlined in the AI act.
How does the AI liability directive differ from the revised Product Liability Directive (PLD)?
The revised PLD focuses on modernizing the existing EU no-fault-based product liability regime and applies to claims made by private individuals against manufacturers for damage caused by defective products. In contrast, the AI liability directive proposes a targeted reform of national fault-based liability regimes and applies to claims made by any natural or legal person against any person for faults influencing the AI system that caused the damage.
What are the concerns regarding the directive's impact on innovation?
The concern is that the proposed rules in the AI liability directive could have a chilling effect on innovation in the tech industry. Some industry associations, such as the App Association, the Developers Alliance, and the Computer & Communications Industry Association (CCIA), believe that the rules will hurt businesses, lead to extensive liability claims, and increase business and insurance costs.
They argue that the provisions requiring AI developers to disclose confidential information and the broad presumptions listed in Article 4 would disproportionately harm small businesses and act as a disincentive for innovators, entrepreneurs, and investors.
These associations emphasize that excessive regulation and the potential curtailment of AI-powered innovations could hinder their full potential. MedTech Europe also questions the need for a separate directive for civil liability involving AI systems and finds the presumptions listed in Article 4 to be too wide-ranging. The Information Technology Industry Council (ITI) calls for coherence between the AI act and the AI liability directive, more safeguards for disclosure orders, and stricter conditions for triggering the causation presumption.
According to some experts, providers of AI systems will find it difficult to adequately protect themselves from liability, as they will have to comply with several product safety and liability regimes at once, including potential claims under the new AI liability directive, the PLD, and the forthcoming AI Act. As a result, there is a risk of a substantial chilling effect on AI innovation in Europe.
That's all for now regarding the EU AI Liability Directive (AILD). I will update this stub as and when I learn more about this piece of legislation. Stay tuned.
If you have questions regarding AI Governance or AI Ethics and its impact on your business, use our contact form to reach out to us.