European Parliament 2019-2024

Committee on Legal Affairs

2020/2014(INL)

28.5.2020

AMENDMENTS
1 - 250
Draft report
Axel Voss
(PE650.556v01-00)

Civil liability regime for artificial intelligence
(2020/2014(INL))

EN United in diversity EN

Amendment 1
Sergey Lagodinsky
Motion for a resolution
Citation 1 a (new)

Motion for a resolution Amendment

- having regard to Article 169 of the Treaty on the Functioning of the European Union,

Or. en

Amendment 2
Sergey Lagodinsky
Motion for a resolution
Citation 3 a (new)

Motion for a resolution Amendment

- having regard to Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services,

Or. en

Amendment 3
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Citation 4 a (new)

Motion for a resolution Amendment

- having regard to the Interinstitutional Agreement of 13 April 2016 on Better Law-Making and the Better Regulation Guidelines,

Or. en

Amendment 4
Kosma Złotowski
Motion for a resolution
Citation 23 a (new)

Motion for a resolution Amendment

- having regard to Judgment of the Court (First Chamber) of 4 June 2009, Moteurs Leroy Somer v Dalkia France and Ace Europe (Case C-285/08),

Or. en

Amendment 5
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Citation 23 a (new)

Motion for a resolution Amendment

- having regard to Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation1a,

____________________
1a OJ L 303, 2.12.2000, p. 16.

Or. fr

Amendment 6
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Citation 23 b (new)

Motion for a resolution Amendment

- having regard to the directives on equal treatment of men and women with regard to employment and access to goods and services,

Or. fr

Amendment 7
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Citation 23 c (new)

Motion for a resolution Amendment

- having regard to various consumer protection rules such as the Unfair Commercial Practices Directive (Directive 2005/29/EC) and the Consumer Rights Directive (Directive 2011/83/EU),

Or. fr

Amendment 8
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Citation 23 d (new)

Motion for a resolution Amendment

- having regard to Directive 2009/48/EC on the safety of toys,

Or. fr

Amendment 9
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Citation 23 e (new)

Motion for a resolution Amendment

- having regard to Regulation (EU) 2017/745 on medical devices, amending Directive 2001/83/EC and applicable from 26 May 2020,

Or. fr

Amendment 10
Sergey Lagodinsky
Motion for a resolution
Recital A

Motion for a resolution Amendment

A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place;

A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim and receive compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place or price the risk of having to compensate into their behaviour;

Or. en

Amendment 11
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Recital A

Motion for a resolution Amendment

A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place;

A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the incentives for natural and legal persons to avoid causing harm or damage in the first place and discourages irresponsible behaviour;

Or. en

Amendment 12
Sergey Lagodinsky
Motion for a resolution
Recital A a (new)

Motion for a resolution Amendment

Aa. whereas Artificial Intelligence and algorithmic decision-making create new consumer and societal challenges and further amplify existing challenges (e.g. privacy, behaviour tracking) that deserve particular attention by policy makers, while liability rules play a key role in enabling trust of citizens in Artificial Intelligence technologies and in the business actors involved.

Or. en

Amendment 13
Daniel Buda
Motion for a resolution
Recital B

Motion for a resolution Amendment

B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

B. whereas any future-orientated liability framework has to instil confidence in the safety, reliability and consistency of products and services, including digital technology, in order to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible, especially in the artificial intelligence sector; whereas ultimately, the goal of any liability framework should be to provide consumer protection and legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

Or. ro

Amendment 14
Emil Radev
Motion for a resolution
Recital B

Motion for a resolution Amendment

B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

B. whereas any future-orientated civil liability legal framework has to strike a balance between efficiently protecting potential victims of harm or damage and, at the same time, providing enough leeway to make it possible for enterprises, and particularly small and medium-sized enterprises, to develop new technologies, products or services; whereas this will help build confidence and create stability for investment; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

Or. bg

Amendment 15
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Recital B

Motion for a resolution Amendment

B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

B. whereas the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party; whereas that framework cannot be subject to strictly profit-making considerations and whereas its objectives include the limitation of the hazardous nature of the products made available to the public;

Or. fr

Amendment 16
Evelyne Gebhardt, Brando Benifei, Lara Wolters
Motion for a resolution
Recital B

Motion for a resolution Amendment

B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

B. whereas any future-orientated liability framework has to ensure that affected persons are appropriately protected against damage and that they are able to claim for compensation in all cases where this seems justified; whereas the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the manufacturer, the developer, the programmer, the backend operator, the frontend operator, the affected person or any other third party;

Or. en

Amendment 17
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Recital B

Motion for a resolution Amendment

B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the developer, the affected person or any other third party;

Or. en

Amendment 18
Sergey Lagodinsky
Motion for a resolution
Recital B

Motion for a resolution Amendment

B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party;

B. whereas any future-orientated liability framework has to strike a balance between efficiently and fairly protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the operator, the affected person or any other third party;

Or. en

Amendment 19
Kosma Złotowski
Motion for a resolution
Recital C a (new)

Motion for a resolution Amendment

Ca. whereas all legislative activities in the Union, related to the explicit assignment of responsibility as regards AI-systems, should be preceded by analysis and consultation with the Member States on the compliance of the proposed regulations with economic, legal and social conditions;

Or. en

Amendment 20
Kosma Złotowski
Motion for a resolution
Recital C b (new)

Motion for a resolution Amendment

Cb. whereas the issue of the civil liability regime for artificial intelligence should be the subject of a broad public debate, taking into account ethical, legal, economic and social aspects, to avoid misunderstandings and unjustified fears that this technology may cause among citizens;

Or. en

Amendment 21
Evelyne Gebhardt, Brando Benifei, Lara Wolters
Motion for a resolution
Recital D

Motion for a resolution Amendment

D. whereas the legal system of a Member State can exclude liability for certain actors or can make it stricter for certain activities; whereas strict liability means that a party can be liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk materializes which that defendant has created for the public, such as in the form of cars or hazardous activities, or which he cannot control, like animals;

D. whereas the legal system of a Member State can exclude liability for certain actors or can make it stricter for certain activities; whereas strict liability means that a party can be liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk materializes which that defendant has created for the public, such as in the form of cars or hazardous activities, or which he cannot control, like animals; whereas strict liability lies on the person that has control over the risks of the operation or is responsible for them;

Or. en

Amendment 22
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Recital D

Motion for a resolution Amendment

D. whereas the legal system of a Member State can exclude liability for certain actors or can make it stricter for certain activities; whereas strict liability means that a party can be liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk materializes which that defendant has created for the public, such as in the form of cars or hazardous activities, or which he cannot control, like animals;

D. whereas the legal system of a Member State can adjust liability for certain actors or can make it stricter for certain activities; whereas strict liability means that a party can be liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk materializes which that defendant has created for the public, such as in the form of cars or hazardous activities, or which he cannot control, like animals;

Or. fr

Amendment 23
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Recital D a (new)

Motion for a resolution Amendment

Da. whereas the notion of Artificial Intelligence (AI)-systems comprises a large group of different technologies, including simple statistics, machine learning and deep learning;

Or. en

Amendment 24
Evelyne Gebhardt, Brando Benifei, Lara Wolters
Motion for a resolution
Recital E

Motion for a resolution Amendment

E. whereas Artificial Intelligence (AI)-systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;

E. whereas Artificial Intelligence (AI)-systems and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity, complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats make it extremely difficult or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;

Or. en

Amendment 25
Daniel Buda
Motion for a resolution
Recital E

Motion for a resolution Amendment

E. whereas Artificial Intelligence (AI)-systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;

E. whereas Artificial Intelligence (AI)-systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation; whereas these factors could make it harder to identify the relationship between injury and the behaviour causing it, with the result that victims might not receive adequate compensation;

Or. ro

Amendment 26
Sergey Lagodinsky
Motion for a resolution
Recital E

Motion for a resolution Amendment

E. whereas Artificial Intelligence (AI)-systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;

E. whereas Artificial Intelligence (AI)-systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely difficult or even impossible to identify who was in control of the risk associated with the AI-system or which code, input or data has ultimately caused the harmful operation, which should be avoided thanks to relevant transparency and information obligations to be borne by the accountable persons;

Or. en

Amendment 27
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Recital E

Motion for a resolution Amendment

E. whereas Artificial Intelligence (AI)-systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;

E. whereas certain AI-systems present significant legal challenges for the existing liability framework and could lead to situations, in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;

Or. en

Amendment 28
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Recital E a (new)

Motion for a resolution Amendment

Ea. whereas the diversity of AI applications and the diverse range of risks the technology poses complicate finding a single solution suitable for the entire spectrum of risks; whereas, in this respect, an approach should be adopted in which experiments, pilots and regulatory sandboxes are used to come up with proportional and evidence-based solutions that address specific situations and sectors where needed;

Or. en

Amendment 29
Emil Radev
Motion for a resolution
Recital E a (new)

Motion for a resolution Amendment

Ea. whereas the lack of clear provisions on risk limitation may create legal uncertainty for enterprises offering AI-systems on the EU market and pose a danger to the persons using them;

Or. bg

Amendment 30
Sergey Lagodinsky
Motion for a resolution
Recital F

Motion for a resolution Amendment

F. whereas this difficulty is compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by the dependency on external data, by the vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities;

F. whereas this difficulty results from the fact that AI-systems and non-AI-systems depend on external data and are subject to imperfection, resulting in the possibility for vulnerability to cybersecurity breaches, as well as from the design of increasingly autonomous AI-systems using, inter alia, machine-learning and deep-learning techniques;

Or. en

Amendment 31
Kosma Złotowski
Motion for a resolution
Recital F a (new)

Motion for a resolution Amendment

Fa. whereas legal certainty is an essential condition for the dynamic development of AI-based technology and its practical application in everyday life; whereas the user needs to be sure that potential damage caused by systems using AI is covered by adequate insurance and that there is a defined legal route for redress;

Or. en

Amendment 32
Daniel Buda
Motion for a resolution
Recital G

Motion for a resolution Amendment

G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair liability procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system.

G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges and eliminate the risk of users being less willing to accept emerging technology; whereas fair liability procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system.

Or. ro

Amendment 33
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Recital G

Motion for a resolution Amendment

G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair liability procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system.

G. whereas AI-systems need to comply with current applicable laws and, additionally, Union and national liability regimes need to be adjusted, where necessary, in order to guarantee solid and fair compensation for affected persons; whereas fair liability procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system.

Or. en

Amendment 34
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Recital G

Motion for a resolution Amendment

G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair liability procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system.

G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair liability procedures means that each person who suffers tangible or intangible harm caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system;

Or. fr

Amendment 35
Sergey Lagodinsky
Motion for a resolution
Recital G

Motion for a resolution Amendment

G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair liability procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system.

G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair compensation procedures means that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system.

Or. en

Amendment 36
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Recital G a (new)

Motion for a resolution Amendment

Ga. whereas the future regulatory framework needs to take into consideration all the interests at stake; whereas careful examination of the consequences of any new regulatory framework on all actors in an impact assessment should be a prerequisite for further legislative steps; whereas the crucial role of SMEs and start-ups especially in the European economy justifies a strictly proportionate approach to enable them to develop and innovate;

Or. en

Amendment 37
Daniel Buda
Motion for a resolution
Recital G a (new)

Motion for a resolution Amendment

Ga. whereas civil liability standards for artificial intelligence must seek to strike a balance between the protection of the public and business incentives to invest in innovation, especially AI systems;

Or. ro

Amendment 38
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Recital G b (new)

Motion for a resolution Amendment

Gb. whereas, on the other hand, the victims of damages caused by AI-systems need to have a right to redress and full compensation of the damages and the harms that they have suffered;

Or. en

Amendment 39
Daniel Buda
Motion for a resolution
Paragraph 1

Motion for a resolution Amendment

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda, in addition to bolstering the single market in goods and services and providing better safeguards to guarantee risk minimisation and provide adequate compensation for damages sustained; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);

Or. ro

Amendment 40
Sergey Lagodinsky
Motion for a resolution
Paragraph 1

Motion for a resolution Amendment

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);

1. Considers that the challenge related to the introduction of AI-systems into society, the workplace and the economy is one of the important questions on the current political agenda; whereas technologies based on AI could, and should endeavour to, improve our lives in almost every sector, from the personal sphere (e.g. personalised education, assistance to vulnerable persons, fitness programs), to the working environment (e.g. alleviation from tedious and repetitive tasks) and to global challenges (e.g. climate emergency, hunger and starvation);

Or. en

Amendment 41
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 1

Motion for a resolution Amendment

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on A I could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);

1. Considers that the challenge related to the introduction of AI-systems into society, particularly the public sphere; digital, judicial and financial interactions; the workplace or the medical sphere, is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education) and the professional sphere (by eliminating repetitive tasks) to global challenges (e.g. climate change and nutrition);

Or. fr

Amendment 42
Kosma Złotowski
Motion for a resolution
Paragraph 1

Motion for a resolution Amendment

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere to global challenges;

Or. en

Amendment 43
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Paragraph 1

Motion for a resolution Amendment

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs, credit provision and court orders) to global challenges (e.g. climate change, hunger and starvation);

Or. fr

Amendment 44
Caterina Chinnici
Motion for a resolution
Paragraph 1

Motion for a resolution Amendment

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, hunger and starvation);

1. Considers that the challenge related to the introduction of AI-systems into society and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) to global challenges (e.g. climate change, healthcare, hunger and starvation);

Or. en

Amendment 45
Marion Walsmann
Motion for a resolution
Paragraph 2

Motion for a resolution Amendment

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values;

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union, effectively protect our European values and ensure legal clarity, with the legal framework being limited to filling existing legal gaps;

Or. de

Amendment 46
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 2

Motion for a resolution Amendment

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values;

2. Firmly believes that in order to prevent potential misuses of AI systems, such systems must be governed across the EU by legislation which is future-proof and based on principles such as those set out in the EU Charter of Fundamental Rights and the report with recommendations to the Commission on an ethical framework for artificial intelligence, robotics and related technologies1a; is of the opinion that a horizontal legal framework based on these common principles is necessary to establish equal standards across the Union and effectively protect our European values;

_____________

1a 2020/2012 (INL)

Or. fr

Amendment 47
Daniel Buda
Motion for a resolution
Paragraph 2

Motion for a resolution Amendment

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values;

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses and avoid regulatory fragmentation in the Union, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal and harmonised legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values;

Or. ro

Amendment 48
Emil Radev
Motion for a resolution
Paragraph 2

Motion for a resolution Amendment

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values;

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, uniform, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values and citizens’ rights;

Or. bg

Amendment 49
Kosma Złotowski
Motion for a resolution
Paragraph 2

Motion for a resolution Amendment

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union and effectively protect our European values;

2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses, principle-based and future-proof legislation across the EU for all AI-systems is crucial; is of the opinion that, while sector specific regulations for the broad range of possible applications are preferable, a horizontal legal framework based on common principles seems necessary to establish equal standards across the Union;

Or. en

Amendment 50
Daniel Buda
Motion for a resolution
Paragraph 3

Motion for a resolution Amendment

3. States that the Digital Single Market needs to be fully harmonized since the digital sphere is characterized by rapid cross-border dynamics and international data flows; considers that the Union will only achieve the objectives of maintaining EU’s digital sovereignty and of boosting digital innovation made in Europe with consistent and common rules;

3. States that the Digital Single Market needs to be fully harmonized and constantly updated since the digital sphere is characterized by rapid cross-border dynamics and international data flows; considers that the Union will only achieve the objectives of maintaining EU’s digital sovereignty and of boosting digital innovation made in Europe with consistent and common rules in line with a culture of innovation;

Or. ro

Amendment 51
Sergey Lagodinsky
Motion for a resolution
Paragraph 3

Motion for a resolution Amendment

3. States that the Digital Single Market needs to be fully harmonized since the digital sphere is characterized by rapid cross-border dynamics and international data flows; considers that the Union will only achieve the objectives of maintaining EU’s digital sovereignty and of boosting digital innovation made in Europe with consistent and common rules;

3. States that the Digital Single Market needs to be fully harmonized since the digital sphere is characterized by rapid cross-border dynamics and international data flows; considers that the Union can contribute to objectives such as building capacity and capability within the EU and boosting digital innovation made in Europe with consistent and common rules;

Or. en

Amendment 52
Kosma Złotowski
Motion for a resolution
Paragraph 3 a (new)

Motion for a resolution Amendment

3a. Notes that the global Artificial Intelligence race is already underway and that the Union should play a leading role in it by exploiting its scientific and technological potential; strongly emphasises that technology development must not come at the expense of protecting users from damage that can be caused by devices and systems using AI; encourages the promotion at international level of the standards on civil liability in the context of AI developed in the Union;

Or. en

Amendment 53
Daniel Buda
Motion for a resolution
Paragraph 3 a (new)

Motion for a resolution Amendment

3a. Takes the view that artificial intelligence will create unprecedented opportunities and advantages for society and that the objective of EU decision makers should be to make Europe a world leader in AI; highlights the need in this regard to establish a clear, predictable legal framework that meets technological challenges effectively without hampering innovation;

Or. ro

Amendment 54
Daniel Buda
Motion for a resolution
Paragraph 3 b (new)

Motion for a resolution Amendment

3b. Considers that, in the global context of digitalisation and emerging digital technologies, international cooperation for the purpose of standardisation is particularly relevant to the competitiveness of European businesses;

Or. ro

Amendment 55
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 4

Motion for a resolution Amendment

4. Firmly believes that the new common rules for AI-systems should only take the form of a regulation; considers that the question of liability in cases of harm or damage caused by an AI-system is one of the key aspects to address within this framework;

4. Considers that the question of liability in cases of harm or damage caused by an AI-system is a key aspect of this harmonisation and that it must therefore be dealt with in the framework of a regulation;

Or. fr

Amendment 56
Caterina Chinnici
Motion for a resolution
Paragraph 4 a (new)

Motion for a resolution Amendment

4a. Underlines the key importance of the principle of transparency in the context of liability rules;

Or. en

Amendment 57
Daniel Buda
Motion for a resolution
Paragraph 5

Motion for a resolution Amendment

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge to the effectiveness of national and Union liability framework provisions; considers that specific and coordinated adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

Or. ro

Amendment 58
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 5

Motion for a resolution Amendment

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability, modification through updates, self-learning and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer material or non-material harm or damage or whose property is damaged end up without compensation;

Or. en

Amendment 59
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Paragraph 5

Motion for a resolution Amendment

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems, as well as the multitude of actors involved, nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

Or. en

Amendment 60
Emil Radev
Motion for a resolution
Paragraph 5

Motion for a resolution Amendment

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments, not only to the current legal framework, are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

Or. bg

Amendment 61
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 5

Motion for a resolution Amendment

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer physical, psychological or mental harm or whose property is damaged end up without compensation;

Or. fr

Amendment 62
Kosma Złotowski
Motion for a resolution
Paragraph 5

Motion for a resolution Amendment

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

5. Believes that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments of the liability regimes are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

Or. en

Amendment 63
Sergey Lagodinsky
Motion for a resolution
Paragraph 5

Motion for a resolution Amendment

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and potential autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;

Or. en

Amendment 64
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 6

Motion for a resolution Amendment

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that this constraint has an even greater impact on the affected person for whom it is impossible to establish causality between the damage and a prior act or omission; stresses that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;


Or. en

Amendment 65
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Paragraph 6

Motion for a resolution Amendment

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; notes in this respect that it is not necessary to give legal personality to AI-systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

Or. en

Amendment 66
Emil Radev
Motion for a resolution
Paragraph 6

Motion for a resolution Amendment

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity, connectivity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, it is necessary to take into account the complexity of value chains; considers that one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

Or. bg

Amendment 67
Sergey Lagodinsky
Motion for a resolution
Paragraph 6

Motion for a resolution Amendment

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that AI-systems designed to be auditable and open to the agency of a human operator at any time can support the possibility of tracing back specific harmful actions of the AI-systems to specific input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, obstacles to this can nevertheless be circumvented by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

Or. en

Amendment 68
Kosma Złotowski
Motion for a resolution
Paragraph 6


Motion for a resolution Amendment

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are overwhelmingly the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system, accountable;

Or. en

Amendment 69
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Paragraph 7

Motion for a resolution Amendment

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; legislative adjustments to the PLD are necessary and should be discussed during a review of that Directive;


Or. en

Amendment 70
Sergey Lagodinsky
Motion for a resolution
Paragraph 7

Motion for a resolution Amendment

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer, notwithstanding their proportionate liability according to their contribution of risk to the harm regulated under these provisions;

Or. en

Amendment 71
Marion Walsmann
Motion for a resolution
Paragraph 7

Motion for a resolution Amendment

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;

7. Considers that the Product Liability Directive (PLD) should be reviewed, and that this review should include adjustments of the definitions of product and defect in order to allow civil liability claims against the producer of a defective AI-system; legislative adjustments to the PLD should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;

Or. de

Amendment 72
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 7

Motion for a resolution Amendment

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;

7. Considers that the Product Liability Directive (PLD) has for over 30 years proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that because it should also be used with regard to civil liability claims against the producer of a defective AI-system, the Directive needs to be updated to include AI systems; underlines that legislative adjustments to the PLD are necessary and should be discussed during a review of that Directive; notes that the ‘backend operator’ does not necessarily coincide with the producer, as it can also be the developer or programmer and, therefore, the ‘backend operator’ can be justifiably covered by a different liability regime than that provided by the PLD;

Or. en

Amendment 73
Daniel Buda
Motion for a resolution
Paragraph 7

Motion for a resolution Amendment

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;

7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; if legislative adjustments to the PLD are necessary, they should be discussed during a review of that Directive, taking account of new civil liability standards for AI systems; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;

Or. ro

Amendment 74
Sergey Lagodinsky
Motion for a resolution
Paragraph 8

Motion for a resolution Amendment

8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious, additional liability rules seem necessary;

8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious, additional liability rules seem necessary; notes, however, that this is notwithstanding malicious intent or gross negligence on the part of the user of the application, which must be accounted for in addition to the strict liability of the operator or manufacturer;

Or. en

Amendment 75
Jiří Pospíšil
Motion for a resolution
Paragraph 8


Motion for a resolution Amendment

8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious, additional liability rules seem necessary;

8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious, or cases in which it would be disproportionately difficult for the injured party to bear the burden of proof, additional liability rules seem necessary;

Or. cs

Amendment 76
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 8

Motion for a resolution Amendment

8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that only for cases in which the third person is untraceable or impecunious, additional liability rules seem necessary;

8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person, as the interference regularly constitutes a fault-based action; notes that due to these characteristics of AI-systems, additional liability rules seem necessary;

Or. en

Amendment 77
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 9

Motion for a resolution Amendment

9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the deployer of an AI-system; affirms that the deployer’s liability is justified by the fact that he or she is controlling a risk associated with the AI-system, comparable to an owner of a car or pet; considers that due to the AI-system’s complexity and connectivity, the deployer will be in many cases the first visible contact point for the affected person;

9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the frontend operator and backend operator of an AI-system; affirms that the frontend operator’s liability is justified by the fact that he or she is benefitting from the use of the AI-system, comparable to an owner of a car or pet; considers that due to the AI-system’s complexity and connectivity, the frontend operator will be in many cases the first visible contact point for the affected person; conversely, believes that the liability of the backend operator under this Regulation is based on the fact that he or she is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation;

Or. en

Amendment 78
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 9

Motion for a resolution Amendment

9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the deployer of an AI-system; affirms that the deployer’s liability is justified by the fact that he or she is controlling a risk associated with the AI-system, comparable to an owner of a car or pet; considers that due to the AI-system’s complexity and connectivity, the deployer will be in many cases the first visible contact point for the affected person;

9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the operator of an AI-system; affirms that the operator’s liability is justified by the fact that he or she is controlling the risks associated with the AI-system, comparable to an owner of a car or pet;

Or. en

Amendment 79
Sergey Lagodinsky
Motion for a resolution
Paragraph 9


Motion for a resolution Amendment

9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the deployer of an AI-system; affirms that the deployer’s liability is justified by the fact that he or she is controlling a risk associated with the AI-system, comparable to an owner of a car or pet; considers that due to the AI-system’s complexity and connectivity, the deployer will be in many cases the first visible contact point for the affected person;

9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the operator of an AI-system; affirms that the operator’s liability is justified by the fact that he or she is controlling a risk associated with the AI-system, comparable to an owner of a car or pet; considers that due to the AI-system’s complexity and connectivity, the operator will be in many cases the first visible contact point for the affected person;

Or. en

Amendment 80
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Subheading 3

Motion for a resolution Amendment

Liability of the deployer

Liability of the frontend and backend operator

Or. en

Amendment 81
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Subheading 3

Motion for a resolution Amendment

Liability of the deployer

Liability of the operator

Or. en

Amendment 82
Sergey Lagodinsky
Motion for a resolution
Subheading 3

Motion for a resolution Amendment

Liability of the deployer

Liability of the operator


Or. en

Amendment 83
Daniel Buda
Motion for a resolution
Paragraph 10

Motion for a resolution Amendment

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration, especially where those possibly affected would not normally have a contractual relationship with the operator; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

Or. ro

Amendment 84
Sergey Lagodinsky
Motion for a resolution
Paragraph 10

Motion for a resolution Amendment

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

10. Opines that liability rules involving the operator should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk, however, require further consideration, while not overlooking other kinds of risk to be potentially caused by AI-systems; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the operator; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the operator of the AI-system;

Or. en

Amendment 85
Marion Walsmann
Motion for a resolution
Paragraph 10

Motion for a resolution Amendment

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system and thus corresponding liability claims might fail;

Or. de

Amendment 86
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 10

Motion for a resolution Amendment

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

10. Opines that liability rules involving the frontend and backend operator should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the frontend and backend operators; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the frontend or backend operator of the AI-system;

Or. en

Amendment 87
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 10

Motion for a resolution Amendment

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

10. Opines that liability rules involving the operator should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the operator; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the operator of the AI-system;

Or. en

Amendment 88
Emmanuel Maurel, Manon Aubry


Motion for a resolution
Paragraph 10

Motion for a resolution Amendment

10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

10. Opines that liability rules involving the deployer should cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;

Or. fr

Amendment 89
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Paragraph 11

Motion for a resolution Amendment

11. Considers it appropriate to define the deployer as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;

11. Considers it appropriate to define the deployer as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system; takes the view that those tasked with deployment should monitor the good intentions of the developers throughout the value chain in order to ensure the protection of consumers through trustworthy AI;


Or. fr

Amendment 90
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 11

Motion for a resolution Amendment

11. Considers it appropriate to define the deployer as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;

11. Considers it appropriate to define the operator as the person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation; considers that exercising control means any action of the operator that influences the operation of the AI-system and thus the extent to which it exposes third parties to its potential risks; considers that these actions could impact the operation from start to finish by determining the input, output or results, or change specific functions or processes within the AI-system;

Or. en

Amendment 91
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 11

Motion for a resolution Amendment

11. Considers it appropriate to define the deployer as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;

11. Considers it appropriate to define the deployer as the natural or legal person involved in placing the AI-system on the market or making it available to users, safeguarding its availability or management, and monitoring the system; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;

Or. fr

Amendment 92


Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 11

Motion for a resolution Amendment

11. Considers it appropriate to define the deployer as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;

11. Considers it appropriate to define the frontend operator as the person who benefits from the use of the AI-system, who enjoys its features and functions; considers that the backend operator is defined as the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation;

Or. en

Amendment 93
Sergey Lagodinsky
Motion for a resolution
Paragraph 11

Motion for a resolution Amendment

11. Considers it appropriate to define the deployer as the person who decides on the use of the AI-system, who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;

11. Considers it appropriate to define the operator as the person who exercises control over the risk and who benefits from its operation; considers that exercising control means any action of the operator that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;

Or. en

Amendment 94
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 12

Motion for a resolution Amendment

12. Notes that there could be situations in which there is more than one deployer; considers that in that event, all deployers should be jointly and severally liable while having the right to recourse proportionally against each other;

12. Notes that there could be situations in which there is more than one operator, for example a backend and frontend operator; considers that in that event, all operators should be jointly and severally liable while having the right to recourse proportionally against each other, reflecting the level of control each party has over the materialized risk;

Or. en

Amendment 95
Daniel Buda
Motion for a resolution
Paragraph 12

Motion for a resolution Amendment

12. Notes that there could be situations in which there is more than one deployer; considers that in that event, all deployers should be jointly and severally liable while having the right to recourse proportionally against each other;

12. Notes that there could be situations in which there is more than one deployer; considers that in that event, all deployers should be jointly and severally liable while having the right to recourse proportionally against each other in line with the level of operational risk control;

Or. ro

Amendment 96
Sergey Lagodinsky
Motion for a resolution
Paragraph 12

Motion for a resolution Amendment

12. Notes that there could be situations in which there is more than one deployer; considers that in that event, all deployers should be jointly and severally liable while having the right to recourse proportionally against each other;

12. Notes that there could be situations in which there is more than one operator; considers that in that event, all operators and, if applicable, users should be jointly and severally liable while having the right to recourse proportionally against each other;

Or. en

Amendment 97
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 12


Motion for a resolution Amendment

12. Notes that there could be situations in which there is more than one deployer; considers that in that event, all deployers should be jointly and severally liable while having the right to recourse proportionally against each other;

12. Notes that there could be situations in which there are several frontend and backend operators; considers that in that event, all of them should be jointly and severally liable while having the right to recourse proportionally against each other;

Or. en

Amendment 98
Kosma Złotowski
Motion for a resolution
Paragraph 12 a (new)

Motion for a resolution Amendment

12a. Notes that giving legal personality to artificial intelligence is neither necessary nor desirable, as the damage caused by the functioning of this new technology can and should be attributed to the responsible producer or deployer;

Or. en

Amendment 99
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Subheading 4

Motion for a resolution Amendment

Different liability rules for different risks

No risk-based approach in the liability phase

Or. en

Amendment 100
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 13

Motion for a resolution Amendment

13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;

13. Recalls that in the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product; notes that so-called low-risk applications may equally cause severe harm or damage;

Or. en

Amendment 101
Daniel Buda
Motion for a resolution
Paragraph 13

Motion for a resolution Amendment

13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;

13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor regarding liability; notes that an AI-system that entails a high inherent risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;

Or. ro

Amendment 102
Sergey Lagodinsky
Motion for a resolution
Paragraph 13

Motion for a resolution Amendment

13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;

13. Recognises that the type of AI-system the operator is exercising control over is a determining factor; notes that an AI-system that entails a high risk and acts autonomously potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk autonomous AI-systems;

Or. en

Amendment 103
Emil Radev
Motion for a resolution
Paragraph 13

Motion for a resolution Amendment

13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;

13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing civil liability regimes, it seems reasonable to set up a common strict liability regime for those high-risk AI-systems;

Or. bg

Amendment 104
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 13

Motion for a resolution Amendment

13. Recognises that the type of AI-system the deployer is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;

13. Recognises that the type of AI-system the operator is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;

Or. en

Amendment 105


Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 14

Motion for a resolution Amendment

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

14. Stresses that the liability model for products containing AI systems has to be approached in a two-step process: firstly, providing a fault-based liability of the frontend operator, against whom the affected person should have the right to bring a claim for damages, with the possibility for the frontend operator to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates; if this obligation is fulfilled, due diligence is presumed; secondly, in the event that no fault of the frontend operator can be established, the backend operator should be held strictly liable; notes that such a two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems;

Or. en

Amendment 106
Emil Radev
Motion for a resolution
Paragraph 14

Motion for a resolution Amendment

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that when determining whether an AI-system is high-risk, the sector in which significant risks can be expected to arise and the nature of the activities undertaken must also be taken into account; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

Or. bg

Amendment 107
Sergey Lagodinsky
Motion for a resolution
Paragraph 14

Motion for a resolution Amendment

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is based on the autonomous decision-making of the technology and thus impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

Or. en

Amendment 108
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Paragraph 14

Motion for a resolution Amendment

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and goes beyond what can reasonably be expected from its intended use; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;


Or. en

Amendment 109
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 14

Motion for a resolution Amendment

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause physical, psychological or mental harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;

Or. fr

Amendment 110
Emmanuel Maurel
Motion for a resolution
Paragraph 15

Motion for a resolution Amendment

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely collaborate with the newly-established European Agency for Artificial Intelligence;

Or. fr

Amendment 111
Daniel Buda
Motion for a resolution
Paragraph 15

Motion for a resolution Amendment

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

15. Recommends that all high-risk AI-systems be exhaustively listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders; is also of the opinion that the European Parliament should appoint consultative experts to advise the newly established standing committee;

Or. ro

Amendment 112
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Paragraph 15

Motion for a resolution Amendment

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders, with the addition of ethics experts and anthropologists, sociologists and mental-health specialists;

Or. fr

Amendment 113
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 15

Motion for a resolution Amendment

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

15. Points out that a risk-based approach to AI within the existing liability framework might create unnecessary fragmentation across the EU, creating legal uncertainty, interpretative issues and confusion amongst users, who would face different levels of protection depending on whether the AI-system is classified as high- or low-risk, which is something users cannot assess on their own; considers that, once the damage has occurred, it is irrelevant whether an AI-system has been classified as high- or low-risk, and that what matters is that affected persons can obtain full compensation for the harm regardless of the risk category;

Or. en

Amendment 114
Marion Walsmann
Motion for a resolution
Paragraph 15

Motion for a resolution Amendment

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

15. recognises that, given the rapid technological change and the required technical expertise, the Commission should draw up a list of high-risk AI systems through a delegated act, in respect of which the European Parliament may raise its objections; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

Or. de

Amendment 115
Kosma Złotowski
Motion for a resolution
Paragraph 15

Motion for a resolution Amendment

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every three months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

Or. en

Amendment 116
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 15

Motion for a resolution Amendment

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological developments and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should be informed by a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which include national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;

Or. en

Amendment 117
Kosma Złotowski
Motion for a resolution
Paragraph 15 a (new)

Motion for a resolution Amendment

15a. Notes that the development of technologies based on artificial intelligence is hugely dynamic and continuously accelerating; stresses that, to ensure adequate protection for users, a fast-track approach is needed to analyse the potential risks of new devices and systems using AI-systems that emerge on the European market; recommends to the Commission that all procedures in this regard be simplified as much as possible;

Or. en

Amendment 118
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 15 a (new)

Motion for a resolution Amendment

15a. Acknowledges that the list thus created could not claim to be exhaustive, particularly with regard to final court rulings which may in the meantime have identified AI systems that do not appear on it;

Or. fr

Amendment 119
Marion Walsmann
Motion for a resolution
Paragraph 16

Motion for a resolution Amendment

16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and should set out the amounts and extent of compensation as well as the limitation period;

16. Believes that the proposed Regulation should set the limitation period and, in line with strict liability systems of the Member States, should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and in this context the Commission should examine whether the scope should be extended to include economic damage;

Or. de

Amendment 120
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 16

Motion for a resolution Amendment

16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and should set out the amounts and extent of compensation as well as the limitation period;

16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should cover tangible and intangible harm to the most important legally protected rights such as life, health, physical, psychological and mental integrity and property, and should set out an indicative amount and extent of compensation as well as the limitation period;

Or. fr

Amendment 121
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 16

Motion for a resolution Amendment

16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and should set out the amounts and extent of compensation as well as the limitation period;

16. Believes that due to the special characteristics of AI-systems, the proposed Regulation should cover material as well as non-material harm, including damage to intangible property and data, such as loss or leak of data, and should ensure that damage is always fully compensated, in compliance with the fundamental right of redress for damage suffered;

Or. en

Amendment 122
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart
Motion for a resolution
Paragraph 16

Motion for a resolution Amendment

16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and should set out the amounts and extent of compensation as well as the limitation period;

16. Believes that the proposed Regulation should cover harm to the important legally protected rights such as life, health, physical integrity, property and significant immaterial harm resulting in economic loss, and should set out the amounts and extent of compensation as well as the limitation period;

Or. en

Amendment 123
Sergey Lagodinsky
Motion for a resolution
Paragraph 16

Motion for a resolution Amendment

16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and should set out the amounts and extent of compensation as well as the limitation period;

16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover harm to the important legally protected rights such as life, health, physical integrity and property, and should set out the extent of compensation as well as the limitation period;

Or. en

Amendment 124
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 17

Motion for a resolution Amendment

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer;

deleted

Or. en

Amendment 125
Daniel Buda
Motion for a resolution
Paragraph 17

Motion for a resolution Amendment

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer;

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer; considers, in this connection, that the national law governing the amount and extent of the compensation and the deadline for bringing claims in respect of injury caused by the AI-system remains applicable;

Or. ro

Amendment 126
Emil Radev
Motion for a resolution
Paragraph 17

Motion for a resolution Amendment

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer;

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer and that while that presumption of fault is rebuttable, the burden of proof lies with the deployer;

Or. bg

Amendment 127
Sergey Lagodinsky
Motion for a resolution
Paragraph 17

Motion for a resolution Amendment

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer;

17. Determines that the criteria defining the level of risk of harm or damage caused by AI-systems but not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the operator, who can exculpate itself by proving that it complied with its duty of care;

Or. en

Amendment 128
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 17

Motion for a resolution Amendment

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the deployer;

17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the operator;

Or. en

Amendment 129
Kosma Złotowski
Motion for a resolution
Paragraph 17 a (new)

Motion for a resolution Amendment

17a. Stresses that the producer's liability for a system or device based on artificial intelligence solutions should be consistently linked to the impossibility of contractually excluding civil liability in this respect, including in business-to-business and business-to-administration relations;

Or. en

Amendment 130
Sergey Lagodinsky
Motion for a resolution
Paragraph 17 a (new)

Motion for a resolution Amendment

17a. Requests the Commission to evaluate the need for regulation on contracts to prevent contractual non-liability clauses.

Or. en

Amendment 131
Daniel Buda
Motion for a resolution
Paragraph 18

Motion for a resolution Amendment

18. Considers the liability risk to be one of the key factors that defines the success of new technologies, products and services; observes that proper risk coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons;

18. Considers the liability risk to be one of the key factors that defines the success of new technologies, products and services; observes that proper risk coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons; notes at the same time that this regulatory system focuses on the need to exploit and enhance the advantages of AI-systems, while putting in place robust safeguards;

Or. ro

Amendment 132
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 18

Motion for a resolution Amendment

18. Considers the liability risk to be one of the key factors that defines the success of new technologies, products and services; observes that proper risk coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons;

18. Considers the liability coverage to be one of the key factors that defines the success of new technologies, products and services; observes that proper liability coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons;

Or. en

Amendment 133
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Paragraph 18

Motion for a resolution Amendment

18. Considers the liability risk to be one of the key factors that defines the success of new technologies, products and services; observes that proper risk coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons;

18. Considers that this liability regime, which guarantees full compensation for any harm caused by AI systems, means that their developers and producers must pay greater attention to the security of those systems; points out that full risk coverage is essential for assuring the public that it can trust the new technology;

Or. fr

Amendment 134
Sergey Lagodinsky
Motion for a resolution
Paragraph 18 a (new)

Motion for a resolution Amendment

18a. Is mindful of the fact that uncertainty regarding risks should not make insurance premiums prohibitively high and thus be an obstacle to research and innovation; proposes that a special mechanism between the Commission and the insurance industry should be developed to address the potential uncertainties in the insurance branch;

Or. en

Amendment 135
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 19

Motion for a resolution Amendment

19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all deployers of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation;

19. Is of the opinion that a mandatory insurance regime for all AI-systems is not the right approach;

_________________
7 OJ L 263, 7.10.2009, p. 11.

Or. en

Amendment 136
Sergey Lagodinsky
Motion for a resolution
Paragraph 19

Motion for a resolution Amendment

19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all deployers of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation;

19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all operators of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation; is mindful of the fact that such technology is currently still very rare, since it presupposes a high degree of autonomous decision making and that, thus, the current proposals are mostly future oriented;

_________________
7 OJ L 263, 7.10.2009, p. 11.

Or. en

Amendment 137
Marion Walsmann
Motion for a resolution
Paragraph 19

Motion for a resolution Amendment

19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all deployers of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation;

19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all deployers of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance;

_________________
7 OJ L 263, 7.10.2009, p. 11.

Or. de

Amendment 138
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 19

Motion for a resolution Amendment

19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all deployers of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation;

19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all operators of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation;

_________________
7 OJ L 263, 7.10.2009, p. 11.

Or. en

Amendment 139
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Paragraph 20

Motion for a resolution Amendment

20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-systems;

deleted

Or. en

Amendment 140
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Paragraph 20

Motion for a resolution Amendment

20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-systems;

20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that a lack of data on the risks associated with AI-systems makes it difficult for the insurance sector to come up with adapted or new insurance products; considers that leaving the development of a mandatory insurance entirely to the market is likely to result in a one-size-fits-all approach with disproportionately high prices and the wrong incentives, stimulating operators to opt for the cheapest insurance rather than for the best coverage; considers that the Commission should work closely with the insurance sector to see how data and innovative models can be used to create insurance products that offer adequate coverage at an affordable price.

Or. en

Amendment 141
Emmanuel Maurel
Motion for a resolution
Paragraph 20

Motion for a resolution Amendment

20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-systems;

20. Believes that it is up to insurers to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-systems;

Or. fr

Amendment 142
Brando Benifei
Motion for a resolution
Paragraph 20

Motion for a resolution Amendment

20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-systems;

20. Underlines that the sole grounds that harm or damage is caused by a non-human agent should not limit the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the party suffering said harm or damage;

Or. en

Amendment 143
Sergey Lagodinsky
Motion for a resolution
Paragraph 20

Motion for a resolution Amendment

20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-systems;

20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that, notwithstanding the aforementioned mechanism between the Commission and the insurance branch, bearing the good experience with regulatory sandboxes in the fintech sector in mind, it should be up to the insurance market to adjust existing products or create new insurance cover for the numerous sectors and various different technologies, products and services that involve AI-systems;

Or. en

Amendment 144
Sergey Lagodinsky
Motion for a resolution
Annex I – part A – paragraph 1 – introductory part

Motion for a resolution Amendment

This Report is addressing an important aspect of digitisation, which itself is shaped by cross-border activities and global competition. The following principles should serve as guidance:

This Report is addressing an important aspect of digitisation, which itself is shaped by cross-border activities, global competition and core societal considerations. The following principles should serve as guidance:

Or. en

Amendment 145
Sergey Lagodinsky
Motion for a resolution
Annex I – part A – paragraph 1 – indent 1

Motion for a resolution Amendment

- A genuine Digital Single Market requires full harmonisation by a Regulation.

- A genuine Digital Single Market requires a level of full harmonisation with the objective of not lowering the legal protection of citizens throughout the Union.

Or. en

Amendment 146
Sergey Lagodinsky
Motion for a resolution
Annex I – part A – paragraph 1 – indent 2

Motion for a resolution Amendment

- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the deployer, the affected person and any other third party.

- New legal challenges posed by the development of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty throughout the liability chain, including the producer, the operator, the affected person and any other third party.

Or. en

Amendment 147
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part A – paragraph 1 – indent 2

Motion for a resolution Amendment

- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the deployer, the affected person and any other third party.

- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the frontend operator, the backend operator, the affected person and any other third party.

Or. en

Amendment 148
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part A – paragraph 1 – indent 2

Motion for a resolution Amendment

- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the deployer, the affected person and any other third party.

- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the operator, the affected person and any other third party.

Or. en

Amendment 149
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part A – paragraph 1 – indent 3

Motion for a resolution Amendment

- There should be no over-regulation as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-ups.

deleted

Or. fr

Amendment 150
Marion Walsmann
Motion for a resolution
Annex I – part A – paragraph 1 – indent 3

Motion for a resolution Amendment

- There should be no over-regulation as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-ups.

- There should be no over-regulation and more red tape must be prevented, as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-ups.

Or. de

Amendment 151
Sergey Lagodinsky
Motion for a resolution
Annex I – part A – paragraph 1 – indent 3

Motion for a resolution Amendment

- There should be no over-regulation as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-ups.

- There should be no over-regulation nor legal uncertainty as this would hamper European innovation in AI, especially if the technology, product or service is developed by SMEs or start-ups.

Or. en

Amendment 152
Daniel Buda
Motion for a resolution
Annex I – part A – paragraph 1 – indent 3 a (new)

Motion for a resolution Amendment

- Civil liability standards for artificial intelligence should seek to strike a balance between the protection of the public on the one hand and business incentives to invest in innovation, especially AI systems, on the other;

Or. ro

Amendment 153
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part A – paragraph 1 – indent 4

Motion for a resolution Amendment

- Instead of replacing the well-functioning existing liability regimes, we should make a few specific adjustments by introducing new and future-orientated ideas.

- Instead of replacing the well-functioning existing liability regimes, we should make specific adjustments by introducing ideas which are legally operational and adapted to the specificities of AI systems;

Or. fr

Amendment 154
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part A – paragraph 1 – indent 4

Motion for a resolution Amendment

- Instead of replacing the well-functioning existing liability regimes, we should make a few specific adjustments by introducing new and future-orientated ideas.

- Instead of replacing the well-functioning existing liability regimes, we should make some necessary adjustments by introducing new and future-orientated ideas.

Or. en

Amendment 155
Sergey Lagodinsky
Motion for a resolution
Annex I – part A – paragraph 1 – indent 5

Motion for a resolution Amendment

- This Report and the Product Liability Directive are two pillars of a common liability framework for AI-systems and require close coordination between all political actors.

- This Report and the Product Liability Directive are two pillars of a common liability framework for AI-systems and require close coordination between all political actors, at Union and national levels.

Or. en

Amendment 156
Marion Walsmann
Motion for a resolution
Annex I – part A – paragraph 1 – indent 5

Motion for a resolution Amendment

- This Report and the Product Liability Directive are two pillars of a common liability framework for AI-systems and require close coordination between all political actors.

- This Report and the Product Liability Directive are two pillars of a common liability framework for AI-systems and require close coordination and alignment between all political actors.

Or. de

Amendment 157
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Annex I – part A – paragraph 1 – indent 6

Motion for a resolution Amendment

- Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, or if it takes place physically or virtually.

- Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, if it takes place physically or virtually, or if it is material or non-material. As set out in the Commission Communication of 19 February 2020 on the safety and liability implications of AI and robotics, ‘explicit obligations for producers, among others, of humanoid AI to explicitly take into account the non-material damage that their products could cause to users, in particular vulnerable users such as elderly people in care settings’ should be taken into account in this EU legislation.

Or. fr

Amendment 158
Marion Walsmann
Motion for a resolution
Annex I – part A – paragraph 1 – indent 6

Motion for a resolution Amendment

- Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, or if it takes place physically or virtually.

- Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, or if it takes place physically or virtually, so that their confidence in this new technology is strengthened.

Or. de

Amendment 159
Sergey Lagodinsky
Motion for a resolution
Annex I – part A – paragraph 1 – indent 6

Motion for a resolution Amendment

- Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, or if it takes place physically or virtually.

- Citizens need to be entitled to the same level of protection and rights, no matter if the harm is caused by an AI-system or not, or if it takes place physically, materially, immaterially or virtually.

Or. en

Amendment 160
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – citation 4 a (new)

Motion for a resolution Amendment

Having regard to Article 169 of the Treaty on the Functioning of the European Union,

Or. en

Amendment 161
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 1

Motion for a resolution Amendment

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place.

Or. fr

Amendment 162
Daniel Buda
Motion for a resolution
Annex I – part B – recital 1

Motion for a resolution Amendment

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to instil confidence in the safety, reliability and consistency of products and services, including digital technologies, in order to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

Or. ro

Amendment 163
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 1

Motion for a resolution Amendment

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the incentives for persons to avoid causing harm or damage in the first place. Any liability framework should protect potential victims of damage, ensuring that victims are able to claim compensation in all cases where this seems justified.

Or. en

Amendment 164
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Annex I – part B – recital 1

Motion for a resolution Amendment

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage, whether material or non-material, in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

Or. fr

Amendment 165
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 1

Motion for a resolution Amendment

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim and get compensation from the party held liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.

Or. en

Amendment 166
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 2

Motion for a resolution Amendment

(2) Especially at the beginning of the life cycle of new products and services, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.

(2) Especially at the beginning of the life cycle of new products and services, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been mitigated by strong product safety legislation and they need to continue to be properly implemented and reviewed where necessary.

Or. en

Amendment 167
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 2

Motion for a resolution Amendment

(2) Especially at the beginning of the life cycle of new products and services, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.

(2) Not only at the beginning of the life cycle of new products and services, but also at later stages, due to modifications through updates, there is a certain degree of risk for the user as well as for third persons that something does not function properly. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.

Or. en

Amendment 168
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 2

Motion for a resolution Amendment

(2) Especially at the beginning of the life cycle of new products and services, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.

(2) Especially at the beginning of the life cycle of new products and services, after those were pre-tested, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.

Or. en

Amendment 169
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 3

Motion for a resolution Amendment

(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) makes it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is even compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.

(3) The rise of Artificial intelligence (AI) and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element), complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats make it extremely difficult or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is even compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.

Or. en

Amendment 170
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Annex I – part B – recital 3

Motion for a resolution Amendment

(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) makes it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is even compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.

(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) makes it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is even compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our human dignity, our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems, or to take biased decisions in matters of health insurance, credit provision, court orders or recruitment or employment decisions.

Or. fr

Amendment 171
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Annex I – part B – recital 3


Motion for a resolution Amendment

(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) makes it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is even compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.

(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) and the multitude of actors who intervene in their life-cycle make it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is even compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.

Or. en

Amendment 172
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 3

Motion for a resolution Amendment

(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) makes it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is even compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.

(3) The rise of Artificial intelligence (AI) however presents a significant challenge for the existing liability frameworks. Some regulatory safeguards should prevent AI-systems’ complexity from leading to opacity for their users, and should make it possible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation, while acknowledging the difficulty of this task. This difficulty results from the fact that AI-systems and non-AI-systems depend on external data and are subject to imperfection, resulting in possible vulnerability to cybersecurity breaches, as well as from the design of increasingly autonomous AI-systems using, inter alia, machine-learning and deep-learning techniques. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values, rights and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal weapon systems.

Or. en

Amendment 173
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 4

Motion for a resolution Amendment

(4) At this point, it is important to point out that the advantages of deploying AI-systems will by far outweigh the disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation is the best way forward.

(4) At this point, it is important to point out that public and private stakeholders should endeavour to make the advantages of deploying AI-systems outweigh the disadvantages. To this purpose, any regulatory intervention should ensure that AI-systems will abide by Union law and ethical standards, will help to fight the climate emergency more effectively, to improve human well-being, notably with respect to medical examinations and working conditions, to better integrate disabled persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation is the best way forward.

Or. en

Amendment 174
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 4

Motion for a resolution Amendment

(4) At this point, it is important to point out that the advantages of deploying AI-systems will by far outweigh the disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation is the best way forward.

(4) At this point, it is important to point out that to ensure that the advantages of deploying AI-systems outweigh the disadvantages, certain adjustments need to be made to Union law. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, it is essential to ensure that AI-systems comply with applicable laws and adjust Union and national liability regimes in order to guarantee solid and fair compensation for affected persons.

Or. en

Amendment 175
Caterina Chinnici
Motion for a resolution
Annex I – part B – recital 4

Motion for a resolution Amendment

(4) At this point, it is important to point out that the advantages of deploying AI-systems will by far outweigh the disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation is the best way forward.

(4) At this point, it is important to point out that the advantages of deploying AI-systems will by far outweigh the disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled and ageing persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation is the best way forward.

Or. en

Amendment 176
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 4

Motion for a resolution Amendment

(4) At this point, it is important to point out that the advantages of deploying AI-systems will by far outweigh the disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation is the best way forward.

(4) It seems reasonable to expect that the advantages of deploying AI-systems will outweigh their disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into the society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation is the best way forward.

Or. fr

Amendment 177
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Annex I – part B – recital 4 a (new)

Motion for a resolution Amendment

(4a) An adequate liability regime is also necessary to counterbalance the breach of safety rules. However, the envisaged liability needs to take into consideration all interests at stake. A careful examination of the consequences of any new regulatory framework on small and medium-sized enterprises (SMEs) and start-ups is a prerequisite for further legislative steps. The crucial role that they play in the European economy justifies a strictly proportionate approach in order to enable them to develop and innovate. On the other hand, the victims of damages caused by AI-systems need to have a right to redress and full compensation of the damages and the harms that they have suffered.

Or. en

Amendment 178
Caterina Chinnici
Motion for a resolution
Annex I – part B – recital 4 a (new)

Motion for a resolution Amendment

(4a) The most vulnerable actors in the context of AI use and application are children; a possible comprehensive EU legal act concerning AI should therefore contain sound rules to ensure the protection of children and the safeguarding of their rights.

Or. en

Amendment 179
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 5

Motion for a resolution Amendment

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A frontend operator of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of the backend operator, of an interfering third party or of the frontend operator and ends up without compensation. Furthermore, the allocation of liability could be unfair or inefficient. To prevent such scenarios, certain adjustments need to be made to Union and national liability regimes.

Or. en

Amendment 180
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 5

Motion for a resolution Amendment

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software, and developed by human intervention. Ultimately, the large majority of AI-systems are used for handling trivial tasks with minimum risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on technologies such as neuronal networks and deep-learning processes. Their complexity and autonomy could make it difficult to trace back specific actions to specific human decisions in their design or in their operation. The person expected to be in control of such an AI-system, like the manufacturer or the operator, might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim if not brought in connection with other elements supporting such a claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the operator and ends up without compensation.

Or. en

Amendment 181
Brando Benifei
Motion for a resolution
Annex I – part B – recital 5

Motion for a resolution Amendment

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

(5) As a result of major technological advances of the last years, AI-systems have become able to perform activities which used to be exclusively human, but the development of certain autonomous and cognitive features, like the ability to learn from data and experience and take decisions, has made them more and more similar to agents that interact with their environment and are able to alter it significantly. In such a context, the legal responsibility arising through a robot’s harmful action becomes a crucial issue. This, in turn, puts into question the current rules on liability, making it necessary for new principles and rules to provide clarity on the legal liability of various actors concerning responsibility for the acts and omissions of AI-systems, as their opacity and autonomy could make it very difficult to trace back specifications to specific human decisions in their design or in their operation and to assess whether their acts or omissions causing harm or damage could have been avoided. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

Or. en

Amendment 182
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 5

Motion for a resolution Amendment

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

Or. fr

Amendment 183
Kosma Złotowski
Motion for a resolution
Annex I – part B – recital 5

Motion for a resolution Amendment

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. There are AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

Or. en

Amendment 184
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 5

Motion for a resolution Amendment

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.

(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are developed and deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. An operator of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the operator and ends up without compensation.

Or. en

Amendment 185
Daniel Buda
Motion for a resolution
Annex I – part B – recital 6

Motion for a resolution Amendment

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges, with a view to preventing regulatory fragmentation and ensuring the harmonisation of Union civil liability legislation in connection with artificial intelligence.


Or. ro

Amendment 186
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 6

Motion for a resolution Amendment

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.

(6) Thus, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. Additionally, strict liability should lie with the person that has more control over the risks of the operation. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and the necessary new provisions would be sufficient to accommodate the AI-related challenges.

Or. en

Amendment 187
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 6

Motion for a resolution Amendment

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates or entertains a risk for the public is accountable if that risk materializes, and thus should ex-ante minimise or ex-post compensate that risk. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.

Or. en

Amendment 188
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior, Svenja Hahn
Motion for a resolution
Annex I – part B – recital 6

Motion for a resolution Amendment

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and well-assessed and targeted new provisions would be sufficient to accommodate the AI-related challenges.

Or. en

Amendment 189
Brando Benifei
Motion for a resolution
Annex I – part B – recital 6

Motion for a resolution Amendment


(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems poses a need for a revision of liability rules throughout the Union. Adjustments of the existing legislation and new provisions would be necessary to adapt it to AI-related challenges.

Or. en

Amendment 190
Kosma Złotowski
Motion for a resolution
Annex I – part B – recital 6

Motion for a resolution Amendment

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would be sufficient to accommodate the AI-related challenges.

(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. However, the increasing and rapid emergence of AI-systems is linked to the need to revise liability rules throughout the Union. Specific adjustments of the existing legislation and very few new provisions would not be enough to accommodate the AI-related challenges.

Or. en

Amendment 191
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 7

Motion for a resolution Amendment

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action. Consequently, this Regulation should focus on claims against the deployer of an AI-system.

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action, subject to situations where the third party uses the AI-system to cause harm. Consequently, this Regulation should focus on claims against the operator of an AI-system.

_________________
3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.

Or. en

Amendment 192
Marion Walsmann
Motion for a resolution
Annex I – part B – recital 7

Motion for a resolution Amendment

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action. Consequently, this Regulation should focus on claims against the deployer of an AI-system.

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. However, so that it can also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system, legislative adjustments during a review of that Directive are necessary. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action. Consequently, this Regulation should focus on claims against the deployer of an AI-system.

_________________
3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.

Or. de

Amendment 193
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 7

Motion for a resolution Amendment

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action. Consequently, this Regulation should focus on claims against the deployer of an AI-system.

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during the review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action. Consequently, this Regulation should focus on claims against the operator of an AI-system.

_________________
3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.

Or. en

Amendment 194
Evelyne Gebhardt, Brando Benifei, Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 7

Motion for a resolution Amendment

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, as that interference regularly constitutes a fault-based action. Consequently, this Regulation should focus on claims against the deployer of an AI-system.

(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has, for over 30 years, provided a valuable safety net to protect consumers from harm caused by defective products and needs to be updated to take account of civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. All necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm or damages caused by an interfering third person, but does not necessarily take account of technological developments. Consequently, this Regulation should focus on claims against the frontend operator and backend operator of an AI-system.

_________________
3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.

Or. en

Amendment 195
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 8

Motion for a resolution Amendment

(8) The liability of the deployer under this Regulation is based on the fact that he or she controls a risk by operating an AI-system. Comparable to an owner of a car or pet, the deployer is able to exercise a certain level of control over the risk that the item poses. Exercising control thereby should be understood as meaning any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system.

(8) The liability of the frontend operator under this Regulation is based on the fact that he or she is the person primarily deciding on and benefitting from the use of the relevant technology. Benefitting from the use thereby should be understood as meaning enjoying the features and functions of the AI-system. Conversely, the liability of the backend operator under this Regulation is based on the fact that he or she is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation.

Or. en

Amendment 196
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 8

Motion for a resolution Amendment

(8) The liability of the deployer under this Regulation is based on the fact that he or she controls a risk by operating an AI-system. Comparable to an owner of a car or pet, the deployer is able to exercise a certain level of control over the risk that the item poses. Exercising control thereby should be understood as meaning any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system.

(8) The liability of the operator under this Regulation is based on the fact that he or she exercises a degree of control over a risk connected to the operation of an AI-system, which is comparable to that of an owner of a car or pet. Exercising control thereby should be understood as meaning any action of the operator that influences the operation of the AI-system and thus the extent to which it exposes third parties to its potential risks. These actions could impact the operation from start to finish by determining the input, output or results, or change specific functions or processes within the AI-system.

Or. en

Amendment 197
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 8

Motion for a resolution Amendment

(8) The liability of the deployer under this Regulation is based on the fact that he or she controls a risk by operating an AI-system. Comparable to an owner of a car or pet, the deployer is able to exercise a certain level of control over the risk that the item poses. Exercising control thereby should be understood as meaning any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system.

(8) The liability of the operator under this Regulation is based on the fact that he or she controls a risk by operating an AI-system. Comparable to an owner of a car or pet, the operator is able to exercise a certain level of control over the risk that the item poses. Exercising control thereby should be understood as meaning any access of the operator that can affect how the risk can materialise, such as the manner of the operation from start to finish or that can change specific functions or processes within the AI-system.

Or. en

Amendment 198
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 8 a (new)

Motion for a resolution Amendment

(8a) The more sophisticated and more autonomous a system is, the greater the impact that defining and influencing the algorithms, for example by continuous updates, could have compared with just starting the system. As there is often more than one person who could, in a meaningful way, be considered as ‘operating’ the technology, both the backend provider and the frontend operator can be qualified as the ‘operator’ of the AI-system, depending on the degree of the exercised control.

Or. en

Amendment 199
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 8 b (new)

Motion for a resolution Amendment

(8b) Although in general, the frontend operator appears as the person who ‘primarily’ decides on the use of the AI-system, the backend provider, who, on a continuous basis, defines the features of the technology and provides data and essential backend support services, could, for example, also have a high degree of control over the operational risks.

Or. en

Amendment 200
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 8 c (new)

Motion for a resolution Amendment

(8c) When there is more than one operator, the strict liability should lie with the one who exercises the highest degree of control over the risks posed by the harmful operation.

Or. en

Amendment 201
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 9

Motion for a resolution Amendment


(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a deployer. This Regulation should not consider the backend operator, who is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, to be a deployer and thus, its provisions should not apply to him or her. For the purpose of legal certainty throughout the Union, the backend operator should fall under the same liability rules as the producer, manufacturer and developer.

(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should be liable under this Regulation if the user also qualifies as an operator; otherwise, the extent of the user’s grossly negligent or intentional contribution to the risk will lead to the user’s fault-based liability to the claimant. Applicable consumer rights of the user should remain unaffected.

Or. en

Amendment 202
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 9

Motion for a resolution Amendment

(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a deployer. This Regulation should not consider the backend operator, who is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, to be a deployer and thus, its provisions should not apply to him or her. For the purpose of legal certainty throughout the Union, the backend operator should fall under the same liability rules as the producer, manufacturer and developer.

(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a frontend operator. It is appropriate to note that the backend operator, who is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, does not necessarily coincide with the producer, as it could also inter alia be the developer or the programmer.

Or. en

Amendment 203
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 9

Motion for a resolution Amendment

(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a deployer. This Regulation should not consider the backend operator, who is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, to be a deployer and thus, its provisions should not apply to him or her. For the purpose of legal certainty throughout the Union, the backend operator should fall under the same liability rules as the producer, manufacturer and developer.

(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as an operator. For the purpose of legal certainty throughout the Union, if the backend operator is the producer or the manufacturer within the meaning of Article 3 of the Product Liability Directive, that Directive should apply to him or her. If there is only one operator, who is also the producer, this Regulation should prevail.

Or. en

Amendment 204
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 10

Motion for a resolution Amendment

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the deployer. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the deployer of the AI-system, while facing severe difficulties to prove fault on the part of the deployer.

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. It should contribute to bringing legal certainty, inasmuch as possible and without pre-empting future technological development, to the different liability claims that the affected persons can bring throughout the liability chain and throughout the lifecycle of an AI-system.

Or. en

Amendment 205
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 10

Motion for a resolution Amendment

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the deployer. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the deployer of the AI-system, while facing severe difficulties to prove fault on the part of the deployer.

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. This Regulation should also but not exclusively address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the frontend operator. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they face severe difficulties to prove fault on the part of the frontend operator.

Or. en

Amendment 206
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 10

Motion for a resolution Amendment

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the deployer. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the deployer of the AI-system, while facing severe difficulties to prove fault on the part of the deployer.

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the deployer. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the deployer of the AI-system, while facing severe difficulties to prove fault on the part of the deployer.

Or. fr

Amendment 207
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 10

Motion for a resolution Amendment

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the deployer. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the deployer of the AI-system, while facing severe difficulties to prove fault on the part of the deployer.

(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the operator. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the operator of the AI-system, while facing severe difficulties to prove fault on the part of the operator.

Or. en

Amendment 208
Sabrina Pignedoli
Motion for a resolution
Annex I – part B – recital 10 a (new)

Motion for a resolution Amendment

(10a) In order to protect consumers, ensure due transparency, analyse possible choices made by AI-systems and, lastly, determine the responsibility of the deployer, this Regulation should introduce a requirement to equip at least those AI-systems considered to be of high risk with a recorder similar to mandatory flight recorders for aeroplanes. Those recorders should be fully accessible to the public authority, allowing full verification of liability for damage or injury caused by an AI-system. Following an accident, serious incident or event identified by the investigating authority, the deployer of an AI-system should store the original data from those recorders for a period of 60 days or until otherwise agreed by the investigating authority. The deployer of an AI-system should carry out regular checks and operational assessments on the proper functioning of those recorders.

Or. it

Amendment 209
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 11

Motion for a resolution Amendment

(11) The type of AI-system the deployer is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the public to a much higher degree and in a manner that is random and impossible to predict in advance. This means that at the start of the autonomous operation of the AI-system, the majority of the potentially affected persons are unknown and not identifiable (e.g. persons on a public square or in a neighbouring house), compared to the operation of an AI-system that involves specific persons, who have regularly consented to its deployment before (e.g. surgery in a hospital or sales demonstration in a small shop). Determining how significant the potential to cause harm or damage by a high-risk AI-system should depend on the interplay between the manner in which the AI-system is being used, the severity of the potential harm or damage and the likelihood that the risk materialises. The degree of severity should be determined based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value for the potential damage as well as the harm to society as a whole. The likelihood should be determined based on the role of the algorithmic calculations in the decision-making process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on the sector in which the AI-system operates, if it could have legal or factual effects on important legally protected rights of the affected person, and whether the effects can reasonably be avoided.

(11) In the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product. It should be noted that so-called low-risk applications may very well cause severe harm or damage. Thus, the liability model for products containing AI applications has to be approached in a two-step process: firstly, providing a fault-based liability of the frontend operator against which the affected person should have the right to bring the claim for damages, with the possibility for the frontend operator to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates. If this obligation is fulfilled, due diligence is presumed. Secondly, in the event where no fault of the frontend operator can be established, the backend operator should be held strictly liable. A two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems;

Or. en

Amendment 210
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 11

Motion for a resolution Amendment

(11) The type of AI-system the deployer is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the public to a much higher degree and in a manner that is random and impossible to predict in advance. This means that at the start of the autonomous operation of the AI-system, the majority of the potentially affected persons are unknown and not identifiable (e.g. persons on a public square or in a neighbouring house), compared to the operation of an AI-system that involves specific persons, who have regularly consented to its deployment before (e.g. surgery in a hospital or sales demonstration in a small shop). Determining how significant the potential to cause harm or damage by a high-risk AI-system should depend on the interplay between the manner in which the AI-system is being used, the severity of the potential harm or damage and the likelihood that the risk materialises. The degree of severity should be determined based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value for the potential damage as well as the harm to society as a whole. The likelihood should be determined based on the role of the algorithmic calculations in the decision-making process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on the sector in which the AI-system operates, if it could have legal or factual effects on important legally protected rights of the affected person, and whether the effects can reasonably be avoided.

(11) The type of AI-system the operator is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the user or the public to a much higher degree and in a manner that is difficult to predict in advance. This means that at the start of the autonomous operation of the AI-system, it is impossible to predict and probably to control the possible malicious behaviour of the software, while its impact can be extremely high due to the extent of harm or the nature of the goods or rights that are exposed to the risk. Determining how significant the potential to cause harm or damage by a high-risk AI-system should depend on the interplay between the purpose of use for which the AI system is put on the market, the manner in which the AI-system is being used, the severity of the potential harm or damage, and the degree of autonomy of decision-making that can result in harm. The degree of severity determining the level of compensation of the affected persons should be assessed based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value, material and immaterial, for the potential damage as well as the harm to society as a whole. The likelihood for the harm or damage to occur should be determined based on the role of the algorithmic calculations in the decision-making process, the possibility of human intervention in this process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on the context in which the AI-system operates, if it could have legal or factual effects on important legally protected rights and goods of the affected person, whether the effects can reasonably be avoided, and according to the level of information provided to the users.

Or. en

Amendment 211
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 11

Motion for a resolution Amendment

(11) The type of AI-system the deployer is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the public to a much higher degree and in a manner that is random and impossible to predict in advance. This means that at the start of the autonomous operation of the AI-system, the majority of the potentially affected persons are unknown and not identifiable (e.g. persons on a public square or in a neighbouring house), compared to the operation of an AI-system that involves specific persons, who have regularly consented to its deployment before (e.g. surgery in a hospital or sales demonstration in a small shop). Determining how significant the potential to cause harm or damage by a high-risk AI-system should depend on the interplay between the manner in which the AI-system is being used, the severity of the potential harm or damage and the likelihood that the risk materialises. The degree of severity should be determined based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value for the potential damage as well as the harm to society as a whole. The likelihood should be determined based on the role of the algorithmic calculations in the decision-making process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on the sector in which the AI-system operates, if it could have legal or factual effects on important legally protected rights of the affected person, and whether the effects can reasonably be avoided.

(11) The type of AI-system the operator is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the public to a much higher degree and in a manner that is random and goes beyond what can reasonably be expected from its intended use. This means that at the start of the autonomous operation of the AI-system, the majority of the potentially affected persons are unknown and not identifiable (e.g. persons on a public square or in a neighbouring house), compared to the operation of an AI-system that involves specific persons, who have regularly consented to its deployment before (e.g. surgery in a hospital or sales demonstration in a small shop). Determining how significant the potential to cause harm or damage by a high-risk AI-system is dependent on the interplay between the manner in which the AI-system is being used, the severity of the potential harm or damage and the likelihood that the risk materialises. The degree of severity should be determined based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value for the potential damage as well as the harm to society as a whole. The likelihood should be determined based on the role of the algorithmic calculations in the decision-making process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on the sector in which the AI-system operates, if it could have legal or factual effects on important legally protected rights of the affected person, and whether the effects can reasonably be avoided.

Or. en

Amendment 212
Marion Walsmann
Motion for a resolution
Annex I – part B – recital 12

Motion for a resolution Amendment

(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

(12) The power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission so that it can draw up a list of the types of AI-systems that pose a high risk and the critical sectors where they are used. To give businesses enough planning and investment security, subsequent changes to the critical sectors should only be made every 12 months.

Or. de

Amendment 213
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 12

Motion for a resolution Amendment

(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

(12) A risk-based approach to AI within the existing liability framework might create unnecessary fragmentation across the Union, creating legal uncertainty, interpretative issues and confusion amongst users, who would face different levels of protection depending on whether the AI-system is classified as high- or low-risk, which is something users cannot assess on their own. Once the damage has occurred, it is irrelevant whether an AI-system has been classified as high- or low-risk; what matters is that affected persons can obtain full compensation for the harm regardless of the risk category.

Or. en

Amendment 214
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 12

Motion for a resolution Amendment

(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

(12) All categories of AI-systems with a high risk, as well as the criteria allowing such risk to be assessed, should be determined according to a structured process involving a Commission expert committee and a constant consultation between the Commission and all involved stakeholders, including researchers, individual experts, free software community members, scientists, engineers, as well as the competent supervisory authorities. Given the rapid technical and market developments worldwide, as well as the technical expertise which is required for an adequate review of AI-systems, a list of categories of AI-systems based on criteria able to define their autonomy and to assess the level of risk over time should be updated on a regular basis, while giving businesses and research organisations enough planning and investment security.

Or. en

Amendment 215
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 12

Motion for a resolution Amendment

(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

(12) All AI-systems with a high risk should be listed, in a way which does not claim to be exhaustive, in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

Or. fr

Amendment 216
Daniel Buda
Motion for a resolution
Annex I – part B – recital 12

Motion for a resolution Amendment

(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

(12) All AI-systems with a high risk should be exhaustively listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

Or. ro

Amendment 217
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 12

Motion for a resolution Amendment

(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every twelve months. Operators are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.

Or. en

Amendment 218
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 13

Motion for a resolution Amendment

(13) It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making4. A standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should support the Commission in its review under this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, business representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States' experts, and their experts systematically have access to meetings of Commission expert groups as well as the standing TCRAI-committee, when dealing with the preparation of delegated acts.

deleted

_________________
4 OJ L 123, 12.5.2016, p. 1.

Or. en

Amendment 219
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 13

Motion for a resolution Amendment

(13) It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making4. A standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should support the Commission in its review under this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, business representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States' experts, and their experts systematically have access to meetings of Commission expert groups as well as the standing TCRAI-committee, when dealing with the preparation of delegated acts.

(13) It is of particular importance that the Commission carry out regular consultations of all relevant stakeholders during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. A standing committee should support the Commission in its regular review under this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, associations representing the victims of harm or damage, business representatives from different sectors and sizes, as well as individual experts, researchers and scientists.

_________________
4 OJ L 123, 12.5.2016, p. 1.

Or. en

Amendment 220
Emmanuel Maurel
Motion for a resolution
Annex I – part B – recital 13

Motion for a resolution Amendment

(13) It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making4. A standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should support the Commission in its review under this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, business representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States' experts, and their experts systematically have access to meetings of Commission expert groups as well as the standing TCRAI-committee, when dealing with the preparation of delegated acts.

(13) It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making4. Within the newly-established European Agency for Artificial Intelligence, a standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should contribute to the review provided for in this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, business representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States' experts, and their experts systematically have access to meetings of Commission expert groups as well as the standing TCRAI-committee, when dealing with the preparation of delegated acts.

_________________
4 OJ L 123, 12.5.2016, p. 1.

Or. fr


Amendment 221
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 14

Motion for a resolution Amendment

(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical integrity and property. For the same reason, it should determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims. In contrast to the Product Liability Directive, this Regulation should set out a significantly lower ceiling for compensation, as it only refers to a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.

(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical, psychological and mental integrity and property.

Or. fr

Amendment 222
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 14

Motion for a resolution Amendment

(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical integrity and property. For the same reason, it should determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims. In contrast to the Product Liability Directive, this Regulation should set out a significantly lower ceiling for compensation, as it only refers to a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.

(14) In line with strict liability systems of the Member States, this Regulation should cover harm or damage to life, health, physical integrity and property, including the affected person’s data and digital environment, as well as other erga omnes legal goods, provided their infringement can result in material damage.


Or. en

Amendment 223
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart
Motion for a resolution
Annex I – part B – recital 14

Motion for a resolution Amendment

(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical integrity and property. For the same reason, it should determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims. In contrast to the Product Liability Directive, this Regulation should set out a significantly lower ceiling for compensation, as it only refers to a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.

(14) This Regulation should cover harm or damage to life, health, physical integrity, property and significant immaterial damage. For immaterial damage to be considered significant, the person must have suffered a noticeable disadvantage and there must have been an objectively comprehensible impairment of personal interests with a substantial economic loss. The Regulation should also determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims. In contrast to the Product Liability Directive, this Regulation should set out a significantly lower ceiling for compensation, as it only refers to the damage of a single person resulting from a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.

Or. en

Amendment 224
Evelyne Gebhardt, Brando Benifei, Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 14

Motion for a resolution Amendment

(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical integrity and property. For the same reason, it should determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims. In contrast to the Product Liability Directive, this Regulation should set out a significantly lower ceiling for compensation, as it only refers to a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.

(14) Due to the special characteristics of AI-systems, the proposed Regulation should cover material as well as non-material harm, including damage to intangible property and data, such as loss or leak of data, and should ensure that damage is fully compensated in compliance with the fundamental right of redress for damage suffered.

Or. en

Amendment 225
Marion Walsmann
Motion for a resolution
Annex I – part B – recital 14

Motion for a resolution Amendment

(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical integrity and property. For the same reason, it should determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims. In contrast to the Product Liability Directive, this Regulation should set out a significantly lower ceiling for compensation, as it only refers to a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.

(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical integrity and property. For the same reason, it should determine the limitation period for bringing forward liability claims. In contrast to the Product Liability Directive, this Regulation should set out a significantly lower ceiling for compensation, as it only refers to a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.

Or. de

Amendment 226
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 15

Motion for a resolution Amendment

(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the deployer.

deleted

Or. en

Amendment 227
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 15

Motion for a resolution Amendment

(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the deployer.

(15) All physical or virtual activities, devices or processes driven by AI-systems that are not qualified as posing a high-risk, under criteria to be determined through a clear process under the auspices of the Commission, should be subject to fault-based liability, notwithstanding stricter national laws and consumer protection legislation in force. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system needs to prove that the claimed harm has been caused by the AI, but should benefit from the presumption of fault of the operator.

Or. en

Amendment 228
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 15

Motion for a resolution Amendment

(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the deployer.

(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability, unless the Court of Justice of the European Union decides otherwise. The national laws of the Member States, including their relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the deployer.

Or. fr

Amendment 229
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 15

Motion for a resolution Amendment

(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the deployer.

(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the operator.

Or. en

Amendment 230
Sergey Lagodinsky, Evelyne Gebhardt
Motion for a resolution
Annex I – part B – recital 16

Motion for a resolution Amendment

(16) The diligence which can be expected from a deployer should be commensurate with (i) the nature of the AI system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the deployer might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the deployer has observed due care in selecting a suitable AI-system, if the deployer has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the deployer has observed due care during the operation of the AI-system, if the deployer can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the deployer has observed due care as regards maintaining the operational reliability, if the deployer installed all available updates provided by the producer of the AI-system.

(16) The diligence which can be expected from an operator should be commensurate with (i) the nature of the AI system, (ii) the information on the nature of the AI system provided to the operator and to the public, (iii) the legally protected right potentially affected, (iv) the potential harm or damage the AI-system could cause and (v) the likelihood of such damage. Thereby, it should be taken into account that the operator might have limited knowledge of the algorithms and data used in the AI-system, even though a sufficient level of information should be ensured, providing for the relevant documentation on the use and design instructions, including the source code and the data used by the AI system, made easily accessible through a mandatory legal deposit. It should be presumed that the operator has observed due care in selecting a suitable AI-system, if the operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the operator has observed due care during the operation of the AI-system, if the operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the operator has observed due care as regards maintaining the operational reliability, if the operator installed all available updates provided by the producer of the AI-system according to the conditions laid down in Directive (EU) 2019/770. Due account should be given to the possibility for the largely volunteer-based free software community to produce software for the general public to use which can be integrated, in whole or in part, into AI systems, without automatically becoming subject to obligations designed for businesses providing digital content in a professional capacity.

Or. en

Amendment 231
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 16


Motion for a resolution Amendment

(16) The diligence which can be expected from a deployer should be commensurate with (i) the nature of the AI system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the deployer might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the deployer has observed due care in selecting a suitable AI-system, if the deployer has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the deployer has observed due care during the operation of the AI-system, if the deployer can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the deployer has observed due care as regards maintaining the operational reliability, if the deployer installed all available updates provided by the producer of the AI-system.

(16) The diligence which can be expected from a frontend operator should be commensurate with (i) the nature of the AI system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the frontend operator might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the frontend operator has observed due care in selecting a suitable AI-system, if the frontend operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the frontend operator has observed due care during the operation of the AI-system, if the frontend operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the backend operator about potential irregularities during the operation. It should be presumed that the frontend operator has observed due care as regards maintaining the operational reliability, if the frontend operator installed all available updates provided by the backend operator of the AI-system.

Or. en

Amendment 232
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 16

Motion for a resolution Amendment

(16) The diligence which can be expected from a deployer should be commensurate with (i) the nature of the AI system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the deployer might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the deployer has observed due care in selecting a suitable AI-system, if the deployer has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the deployer has observed due care during the operation of the AI-system, if the deployer can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the deployer has observed due care as regards maintaining the operational reliability, if the deployer installed all available updates provided by the producer of the AI-system.

(16) The diligence which can be expected from an operator should be commensurate with (i) the nature of the AI system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the operator might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the operator has observed due care in selecting a suitable AI-system, if the operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the operator has observed due care during the operation of the AI-system, if the operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the operator has observed due care as regards maintaining the operational reliability, if the operator installed all available updates provided by the producer of the AI-system.

Or. en

Amendment 233
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 17

Motion for a resolution Amendment

(17) In order to enable the deployer to prove that he or she was not at fault, the producers should have the duty to collaborate with the deployer. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from deployers, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account.

(17) In order to enable the operator to prove that he or she was not at fault, or the affected person to prove the existence of fault, the producers should have the duty to collaborate with both parties concerned. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from operators, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account.

_________________
5 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
6 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).

Or. en

Amendment 234
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 17

Motion for a resolution Amendment

(17) In order to enable the deployer to prove that he or she was not at fault, the producers should have the duty to collaborate with the deployer. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from deployers, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account.

(17) In order to enable the operator to prove that he or she was not at fault, the producers should have the duty to collaborate with the operator, including by providing well-documented information. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from operators, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council (manufacturer's representative) into account.

_________________
5 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
6 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).

Or. en

Amendment 235
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 17

Motion for a resolution Amendment

(17) In order to enable the deployer to prove that he or she was not at fault, the producers should have the duty to collaborate with the deployer. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from deployers, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account.

(17) In order to enable the frontend operator to prove that he or she was not at fault, the backend operators should have the duty to collaborate with the frontend operator. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from frontend operators, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account.

_________________
5 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
6 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).

Or. en

Amendment 236
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 18

Motion for a resolution Amendment

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies.

Or. fr

Amendment 237
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 18

Motion for a resolution Amendment

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life, including the waste and recycling management. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their Research and Development projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate, research, and ultimately offer new products and services contributing to the well-being of society, as well as whether the customers trust in such products and services and are willing to use them despite the potential risks and legal claims being brought against them. In consideration of the dynamic nature of the risk pertaining to AI systems, to which the whole public is potentially exposed, civil liability rules should be governed by a high level of protection of potentially affected persons and public goods.

Or. en

Amendment 238
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart
Motion for a resolution
Annex I – part B – recital 18

Motion for a resolution Amendment

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for SMEs as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.

Or. en

Amendment 239
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 18

Motion for a resolution Amendment

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.

(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.

Or. en

Amendment 240
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 19

Motion for a resolution Amendment

(19) Insurance can help to ensure that victims can receive effective compensation as well as to pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment based on access to sufficient historical claim data. A lack of access to, or an insufficient quantity of high quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to and optimising the use of data generated by new technologies will enhance insurers’ ability to model emerging risk and to foster the development of more innovative cover.

deleted

Or. fr


Amendment 241
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 19

Motion for a resolution Amendment

(19) Insurance can help to ensure that victims can receive effective compensation as well as to pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment based on access to sufficient historical claim data. A lack of access to, or an insufficient quantity of high quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to and optimising the use of data generated by new technologies will enhance insurers’ ability to model emerging risk and to foster the development of more innovative cover.

deleted

Or. en

Amendment 242
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 19

Motion for a resolution Amendment

(19) Insurance can help to ensure that victims can receive effective compensation as well as to pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment based on access to sufficient historical claim data. A lack of access to, or an insufficient quantity of high quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to and optimising the use of data generated by new technologies will enhance insurers’ ability to model emerging risk and to foster the development of more innovative cover.

(19) Insurance can help to ensure that victims can receive effective compensation as well as to pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment based on access to sufficient historical claim data. A lack of access to, or an insufficient quantity of high quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to and optimising the use of data generated by new technologies, coupled with an obligation to provide well-documented information, will enhance insurers’ ability to model emerging risk and to foster the development of more innovative cover.

Or. en

Amendment 243
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 20

Motion for a resolution Amendment

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

deleted

Or. en

Amendment 244
Sergey Lagodinsky
Motion for a resolution
Annex I – part B – recital 20

Motion for a resolution Amendment

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. Member States should be encouraged to set up a special compensation fund to supplement the liability insurance cover in order to ensure that damages can be effectively compensated for in cases where no insurance cover exists. In order to ensure legal certainty and to fulfil the obligation to inform all potential affected persons, the existence of the relevant insurance and fund shall be made publicly visible by an individual registration number appearing in a specific Union register, which would allow anyone interacting with the AI system to be informed about the ways of action when a harm or a damage occurs, the limits of liability attached to it, the names and the functions of the operator and all other relevant details.

Or. en

Amendment 245
Marion Walsmann
Motion for a resolution
Annex I – part B – recital 20

Motion for a resolution Amendment

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap.

Or. de

Amendment 246
Emmanuel Maurel, Manon Aubry
Motion for a resolution
Annex I – part B – recital 20

Motion for a resolution Amendment

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap.

Or. fr

Amendment 247
Liesje Schreinemacher, Adrián Vázquez Lázara, Javier Nart, Karen Melchior
Motion for a resolution
Annex I – part B – recital 20


Motion for a resolution Amendment

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

(20) Because historical claim data is missing, it should be investigated how and under which conditions liability is insurable. There are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. However, a “one size fits all” solution is difficult to envisage and the insurance market will need time to adapt. The Commission should work closely with the insurance market to develop innovative insurance products that could close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

Or. en

Amendment 248
Gilles Lebreton, Gunnar Beck, Jean-Paul Garraud, Luisa Regimenti
Motion for a resolution
Annex I – part B – recital 20

Motion for a resolution Amendment

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

(20) Despite missing historical claim data for reasons such as updating algorithms or anonymising data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

Or. fr

Amendment 249
Daniel Buda
Motion for a resolution
Annex I – part B – recital 20

Motion for a resolution Amendment

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, such as an event giving rise to collective damages, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.

Or. ro

Amendment 250
Evelyne Gebhardt, Brando Benifei
Motion for a resolution
Annex I – part B – recital 21

Motion for a resolution Amendment

(21) It is of utmost importance that any future changes to this text go hand in hand with a necessary review of the PLD. The introduction of a new liability regime for the deployer of AI-systems requires that the provisions of this Regulation and the review of the PLD should be closely coordinated in terms of substance as well as approach so that they together constitute a consistent liability framework for AI-systems, balancing the interests of producer, deployer and the affected person, as regards the liability risk. Adapting and streamlining the definitions of AI-system, deployer, producer, developer, defect, product and service throughout all pieces of legislation is therefore necessary.

(21) It is of utmost importance that any future changes to this text go hand in hand with a necessary review of the PLD. The introduction of a new liability regime for the frontend operator and backend operator of AI-systems requires that the provisions of this Regulation and the review of the PLD should be closely coordinated in terms of substance as well as approach so that they together constitute a consistent liability framework for AI-systems.

Or. en
