European Union Commission proposes an Artificial Intelligence Directive to complement the Artificial Intelligence Act

October 9, 2022

Artificial intelligence is now a key policy challenge across a range of disciplines: administration of justice, privacy, access to services, insurance and other forms of risk assessment, medicine, construction and manufacturing. It is transformative and will continue to be so. It also raises questions about liability for products. The EU has proposed a legal framework on AI.

The reasons for that framework are described by the EU as follows:

The Commission is proposing the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).

The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together, the Regulatory framework and Coordinated Plan will guarantee the safety and fundamental rights of people and businesses when it comes to AI. And, they will strengthen uptake, investment and innovation in AI across the EU.

Why do we need rules on AI?

The proposed AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.

For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The proposed rules will:

    • address risks specifically created by AI applications;
    • propose a list of high-risk applications;
    • set clear requirements for AI systems for high-risk applications;
    • define specific obligations for AI users and providers of high-risk applications;
    • propose a conformity assessment before the AI system is put into service or placed on the market;
    • propose enforcement after such an AI system is placed in the market;
    • propose a governance structure at European and national level.

A risk-based approach

[Figure: pyramid showing the four levels of risk: unacceptable risk; high risk; limited risk; minimal or no risk]

The Regulatory Framework defines 4 levels of risk in AI:

    • Unacceptable risk
    • High risk
    • Limited risk
    • Minimal or no risk
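To make the tiered structure concrete for developers, here is a minimal Python sketch of the taxonomy, using example use cases named in the proposal. The enum names, the mapping and the default tier are illustrative assumptions, not terms defined in the Regulation.

```python
# Illustrative sketch of the Act's four-tier risk taxonomy.
# The names and the example mapping are our own shorthand.
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"    # banned outright
    HIGH = "high"                    # strict obligations before market entry
    LIMITED = "limited"              # transparency obligations only
    MINIMAL = "minimal_or_none"      # free use

# Example use cases named in the proposal, mapped to tiers (assumed mapping).
EXAMPLE_CLASSIFICATION = {
    "government_social_scoring": RiskLevel.UNACCEPTABLE,
    "exam_scoring": RiskLevel.HIGH,
    "cv_sorting_for_recruitment": RiskLevel.HIGH,
    "customer_service_chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def risk_level(use_case: str) -> RiskLevel:
    """Look up the risk tier for a use case, defaulting to minimal risk."""
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskLevel.MINIMAL)

print(risk_level("cv_sorting_for_recruitment"))  # RiskLevel.HIGH
```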

Unacceptable risk

All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High risk

AI systems identified as high-risk include AI technology used in:

    • critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
    • educational or vocational training, that may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
    • safety components of products (e.g. AI application in robot-assisted surgery);
    • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
    • essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
    • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
    • migration, asylum and border control management (e.g. verification of authenticity of travel documents);
    • administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

    • adequate risk assessment and mitigation systems;
    • high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
    • logging of activity to ensure traceability of results;
    • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
    • clear and adequate information to the user;
    • appropriate human oversight measures to minimise risk;
    • high level of robustness, security and accuracy.
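The “logging of activity” obligation, in particular, lends itself to a code illustration. The sketch below wraps a decision function so that every call is recorded with a timestamp, its inputs and its output. The log format, field names and the placeholder loan-scoring logic are our own assumptions; the proposal does not prescribe a schema.

```python
# Illustrative audit-trail wrapper for an automated decision function.
import json
import logging
from datetime import datetime, timezone
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit_trail")

def audited(decision_fn):
    """Record each automated decision so its result can be traced later."""
    @wraps(decision_fn)
    def wrapper(*args, **kwargs):
        result = decision_fn(*args, **kwargs)
        log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "function": decision_fn.__name__,
            "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
            "output": repr(result),
        }))
        return result
    return wrapper

@audited
def score_loan_application(income: float, debts: float) -> bool:
    # Placeholder decision logic for illustration only.
    return income > 2 * debts

score_loan_application(50_000.0, 10_000.0)
```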

All remote biometric identification systems are considered high risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat, or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.

Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk

Limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.
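As a rough illustration of what such a transparency obligation might look like in practice, the following sketch discloses the machine interaction up front and lets the user step back. The wording and flow are assumptions, not requirements quoted from the proposal.

```python
# Illustrative transparency notice for a chatbot session.
def start_chat_session() -> bool:
    """Disclose that the user is talking to a machine; return True to proceed."""
    print("Notice: you are chatting with an automated system, not a human.")
    answer = input("Continue? [y/n] ").strip().lower()
    return answer == "y"

if __name__ == "__main__":
    if start_chat_session():
        print("Bot: How can I help you today?")
    else:
        print("Session ended at the user's request.")
```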

Minimal or no risk

The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

[Figure: step-by-step process for declaration of conformity]

How does it all work in practice for providers of high-risk AI systems?

Once an AI system is on the market, authorities are in charge of market surveillance, users ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning, as sketched below.
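As a rough sketch of what provider-side incident reporting could look like, the snippet below models a serious-incident report. The field names, the example system identifier and the notification function are illustrative assumptions; the proposal does not define a reporting schema.

```python
# Illustrative provider-side report for post-market monitoring.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    system_id: str        # identifier of the AI system (assumed format)
    description: str      # what happened and who was affected
    is_malfunction: bool  # malfunction vs. misuse or other cause
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def notify_market_surveillance_authority(report: SeriousIncidentReport) -> None:
    # Placeholder: a real provider would submit through the official channel.
    print(f"[{report.reported_at}] reporting incident for {report.system_id}")

notify_market_surveillance_authority(
    SeriousIncidentReport(
        system_id="cv-screener-v2",
        description="Systematic rejection of applicants over 55.",
        is_malfunction=True,
    )
)
```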

Future-proof legislation

As AI is a fast evolving technology, the proposal has a future-proof approach, allowing rules to adapt to technological change. AI applications should remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers.

And on that note, the EU has proposed new liability rules on AI and products that balance consumer protection with innovation.

The EU media release states:

Today, the Commission adopted two proposals to adapt liability rules to the digital age, circular economy and the impact of global value chains. Firstly, it proposes to modernise the existing rules on the strict liability of manufacturers for defective products (from smart technology to pharmaceuticals). The revised rules will give businesses legal certainty so they can invest in new and innovative products and will ensure that victims can get fair compensation when defective products, including digital and refurbished products, cause harm. Secondly, the Commission proposes for the first time a targeted harmonisation of national liability rules for AI, making it easier for victims of AI-related damage to get compensation. In line with the objectives of the AI White Paper and with the Commission’s 2021 AI Act proposal, which sets out a framework for excellence and trust in AI, the new rules will ensure that victims benefit from the same standards of protection when harmed by AI products or services as they would if harm was caused under any other circumstances.

Revised Product Liability Directive, fit for the green and digital transition and global value chains

The revised Directive modernises and reinforces the current well-established rules, based on the strict liability of manufacturers, for the compensation of personal injury, damage to property or data loss caused by unsafe products, from garden chairs to advanced machinery. It ensures fair and predictable rules for businesses and consumers alike by:

•  Modernising liability rules for circular economy business models: by ensuring that liability rules are clear and fair for companies that substantially modify products.

•  Modernising liability rules for products in the digital age: allowing compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates, AI or digital services that are needed to operate the product, as well as when manufacturers fail to address cybersecurity vulnerabilities.

•  Creating a more level playing field between EU and non-EU manufacturers: when consumers are injured by unsafe products imported from outside the EU, they will be able to turn to the importer or the manufacturer’s EU representative for compensation.

•  Putting consumers on an equal footing with manufacturers: by requiring manufacturers to disclose evidence, by introducing more flexibility to the time restrictions to introduce claims, and by alleviating the burden of proof for victims in complex cases, such as those involving pharmaceuticals or AI.

AI Liability Directive: easier access to redress for victims

The purpose of the AI Liability Directive is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damages caused by AI systems, establishing broader protection for victims (be it individuals or businesses), and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside of the scope of the Product Liability Directive, in cases in which damage is caused due to wrongful behaviour. This covers, for example, breaches of privacy, or damages caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if someone has been discriminated against in a recruitment process involving AI technology.

The Directive simplifies the legal process for victims when it comes to proving that someone’s fault led to damage, by introducing two main features. First, in circumstances where a relevant fault has been established and a causal link to the AI performance seems reasonably likely, the so-called ‘presumption of causality’ will address the difficulties experienced by victims in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, victims will have more tools to seek legal reparation, through a right of access to evidence from companies and suppliers in cases in which high-risk AI is involved.
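Expressed as a decision helper, the two features might look like the sketch below. The predicate names are illustrative assumptions; in reality these tests are applied by courts, not code.

```python
# Illustrative encoding of the AI Liability Directive's two main features.
from dataclasses import dataclass

@dataclass
class Claim:
    fault_established: bool               # a relevant fault has been shown
    causal_link_reasonably_likely: bool   # link to the AI output seems likely
    high_risk_system: bool                # system falls in the high-risk tier

def presume_causality(claim: Claim) -> bool:
    """Feature 1: the presumption of causality relieves the victim of
    proving exactly how the fault produced the harm."""
    return claim.fault_established and claim.causal_link_reasonably_likely

def evidence_access_available(claim: Claim) -> bool:
    """Feature 2: a right of access to evidence where high-risk AI
    is involved."""
    return claim.high_risk_system

claim = Claim(fault_established=True,
              causal_link_reasonably_likely=True,
              high_risk_system=True)
assert presume_causality(claim) and evidence_access_available(claim)
```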

The new rules strike a balance between protecting consumers and fostering innovation, removing additional barriers for victims to access compensation, while laying down guarantees for the AI sector by introducing, for instance, the right to fight a liability claim based on a presumption of causality.

Members of the College said:

Vice-President for Values and Transparency, Věra Jourová, said: “We want the AI technologies to thrive in the EU. For this to happen, people need to trust digital innovations. With today’s proposal on AI civil liability we give customers tools for remedies in case of damage caused by AI so that they have the same level of protection as with traditional technologies and we ensure legal certainty for our internal market.”

Commissioner for Internal Market, Thierry Breton, said: “The Product Liability Directive has been a cornerstone of the internal market for four decades. Today’s proposal will make it fit to respond to the challenges of the decades to come. The new rules will reflect global value chains, foster innovation and consumer trust, and provide stronger legal certainty for businesses involved in the green and digital transition.”

Commissioner for Justice, Didier Reynders, said: “While considering the huge potential of new technologies, we must always ensure the safety of consumers. Proper standards of protection for EU citizens are the basis for consumer trust and therefore successful innovation. New technologies like drones or delivery services operated by AI can only work when consumers feel safe and protected. Today, we propose modern liability rules that will do just that. We make our legal framework fit for the realities of the digital transformation.”

Next steps

The Commission’s proposal will now need to be adopted by the European Parliament and the Council.

It is proposed that, five years after the entry into force of the AI Liability Directive, the Commission will assess the need for no-fault liability rules for AI-related claims.

Background

The current EU rules on product liability, based on the strict liability of manufacturers, are almost 40 years old. Modern rules on liability are important for the green and digital transformation, specifically to adapt to new technologies, like Artificial Intelligence. This is about providing legal certainty for businesses and ensuring consumers are well protected in case something goes wrong.

In her Political Guidelines, President von der Leyen laid out a coordinated European approach on Artificial Intelligence. The Commission has undertaken to promote the uptake of AI and to holistically address the risks associated with its uses and potential damages.

In its White Paper on AI of 19 February 2020, the Commission undertook to promote the uptake of AI and to address the risks associated with some of its uses by fostering excellence and trust. In the Report on AI Liability accompanying the White Paper, the Commission identified the specific challenges posed by AI to existing liability rules.

The Commission adopted its proposal for the AI Act, which lays down horizontal rules on artificial intelligence, focusing on the prevention of damage, in April 2021. The AI Act is a flagship initiative for ensuring safety and trustworthiness of high-risk AI systems developed and used in the EU. It will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation. Today’s AI liability package complements the AI Act by facilitating fault-based civil liability claims for damages, laying down a new standard of trust in reparation.

The AI Liability Directive adapts private law to the new challenges brought by AI. Together with the revision of the Product Liability Directive, these initiatives complement the Commission’s effort to make liability rules fit for the green and digital transition.
