Artificial Intelligence and civil liability. Who pays the damages?

Following COVID-19, the use of AI in both the public and private sectors seems unavoidable. Many industries stand to benefit greatly from its use: manufacturing (e.g. industrial robots), transport (e.g. autonomous vehicles), financial markets, and health and medical care (e.g. medical robots, diagnostic tools and assistive technology), as well as more general applications such as the self-cleaning of public spaces. However, AI also carries risks, including its opacity, often referred to as its "black box" character. The European institutions have been trying to address this and related issues for a number of years, but post-COVID-19 the spotlight is now firmly focused on AI.

The European Commission has drafted many documents, including a White Paper on Artificial Intelligence (19 February 2020). This was followed by a draft report containing recommendations to the Commission on a civil liability regime for AI, which included a motion for a European Parliament resolution and a proposal for a European Parliament and Council regulation on liability for the operation of AI-systems (27 April 2020).

A further study on Artificial Intelligence and Civil Liability, commissioned by Policy Department C at the request of the Committee on Legal Affairs, was published on 14 July 2020.

In all of these documents, the European institutions and expert groups stress that a key issue arising from the use of AI (in the public or private sector) is liability for damage arising from the use of, or defects in, AI tools. Many AI-systems depend on external data and are vulnerable to cybersecurity breaches. As AI becomes more opaque and more autonomous, it becomes increasingly difficult for the harmed individual to identify the liable party and, consequently, to obtain compensation.

Currently, the Product Liability Directive (85/374/EEC) is the framework governing such liability. The directive has been implemented by the member states and places liability on the producer for damage caused by a defective product. The consumer, or more generally the injured person, has to prove the causal link between the product defect and the damage. Where the damage is caused by an AI tool, this is not so easy to prove.

Nonetheless, the experts stress that a complete overhaul of the general European legal framework on civil liability is not required; rather, it is necessary to adapt the legislation in force and to introduce new provisions.

In light of this, the draft report of the European Parliament includes a proposal for a regulation of the European Parliament and of the Council on liability for the operation of AI-systems. This regulation, if approved, would introduce a new form of liability for the deployer of the AI-system, defined as the person who decides on the use of the AI-system, exercises control over the associated risk and benefits from its operation.

Notably, the proposed regulation provides for strict liability for high-risk AI-systems, that is, autonomously operating systems with a significant potential to cause harm or damage (see arts. 3 and 4). In line with other legislation on civil liability in critical and high-risk sectors, the proposed regulation provides for compulsory insurance cover. It also establishes a maximum amount of compensation.

By contrast, under art. 8 of the proposed regulation, the deployer of an AI-system that is not classified as high-risk under the regulation is subject to fault-based liability for any harm or damage caused by a physical or virtual activity, device or process driven by the AI-system.

In short, this creates a two-track liability regime based on the risk of the activity.

Reflecting the traditional principles of civil liability, the proposed regulation introduces further provisions on damages, limitation periods, multiple tortfeasors, and so on. In line with other documents addressing the issue of liability, the proposed regulation attempts to strike a balance between protecting the rights of users and the wider community, and encouraging the development of new and innovative technologies.

Finally, because technology often changes faster than legislation, even the newest legislation may not cover every challenge posed by AI. In the interim, the general rules and principles in force in each legal system should be applied, while the law continues to change and adapt.
