European Parliament PR of 20 April 2022

European Parliament adopts its position on Artificial Intelligence Act

The European Parliament on 20 April 2022 adopted its position on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. It now calls on the Commission to refer the matter to Parliament again if it replaces, substantially amends or intends to substantially amend its proposal. The European Parliament instructs its President to forward its position to the Council, the Commission and the national parliaments.

The co-rapporteurs share the view that artificial intelligence developed and used in Europe should be human-centric and trustworthy and should respect fundamental rights and the Union values enshrined in the Treaties. At the same time, regulation should not hinder innovation and the business environment but should instead support them. Both objectives are best achieved by increasing legal certainty and clarity throughout the proposed Regulation, in order to help the private sector and public authorities comply with the new obligations. The draft report contains the points on which the co-rapporteurs could easily agree, and it touches upon all the main elements of the draft Regulation.

In terms of scope, the co-rapporteurs agree with the risk-based approach proposed by the Commission. That is, the obligations set out in this Regulation apply only to prohibited practices, to high-risk AI systems, and to certain AI systems that require transparency. As such, no AI system should be excluded ex ante, either from the definition of “artificial intelligence” or by carving out exceptions for particular types of AI systems, including general-purpose AI. Where, for objective reasons, providers are unable to fulfil the obligations under this Regulation, they should be able to enter into agreements with users to share the responsibilities. A key element of the draft report is also the alignment of the text with the GDPR, as the two regulations should complement one another in supporting the development and uptake of AI in Europe.

In terms of prohibited practices, the co-rapporteurs have agreed to add practices amounting to “predictive policing” to the list, as they share the view that liberal societies cannot use technology in breach of the key principle of the presumption of innocence.

As regards high-risk AI systems, which are the main focus of the Regulation, the co-rapporteurs propose adding a number of use cases to the list of high-risk AI systems. As children are a particularly vulnerable category, AI systems used to influence or shape their development should be considered high risk. AI systems used by candidates or parties to influence votes in local, national, or European elections, and AI systems used to count such votes, have the potential, by influencing a large number of citizens of the Union, to impact the very functioning of our democracy. They should therefore be considered high risk. AI systems used for the triage of patients in the healthcare sector, and AI systems used to determine eligibility for health and life insurance, are also considered high risk. Because of their potential for deception, two types of AI systems should be subject to both transparency requirements and the conformity requirements for high-risk AI systems: deepfakes impersonating real persons, and editorial content written by AI (“AI authors”). The co-rapporteurs stress that high-risk AI systems are not prohibited, nor are they to be seen as undesirable. On the contrary, complying with the conformity requirements set out in this Regulation makes such systems more trustworthy and more likely to be successful on the European market.

The draft report considers more closely the chain of responsibility and seeks to clarify and rebalance some provisions. In particular, on data governance, consistency with the GDPR has been strengthened and the possible additional legal basis for processing personal data has been removed. In addition, it has been clarified that “error-free” datasets should be an overall objective to be pursued to the best extent possible, rather than a precise requirement. The cases in which datasets are in the possession of users, while the provider only builds the overall architecture of the system, have also been clarified. Most of these clarifications take into account concerns expressed by industry, as the AI value chain is not always linear and responsibilities need to be clearly delineated between the different actors in the value chain.

Users of high-risk AI systems also play a role in protecting the health, safety, and fundamental rights of EU citizens and EU values, from ensuring that they appoint competent persons responsible for the human oversight of high-risk AI systems to playing a more active role in reporting cases of incidents or malfunctioning of an AI system, as they are sometimes best placed to spot such incidents or malfunctions. Users who are public authorities are subject to increased transparency expectations in democratic societies. As such, public authorities, Union institutions, agencies, or bodies should register the use of high-risk AI systems in the EU-wide database. This allows for increased democratic oversight, public scrutiny, and accountability, alongside more transparency towards the public on the use of AI systems in sensitive areas impacting upon people’s lives. Additionally, users of high-risk AI systems referred to in Annex III that make decisions or that assist in making decisions related to natural persons should inform the natural persons that they are subject to the use of the high-risk AI system.

Several provisions of the draft Report focus on governance and enforcement, as the co-rapporteurs are convinced these are key elements to allow the AI Act to be implemented effectively and consistently throughout the Union and therefore help create a true Single Market for AI.

To this end, the tasks of the AI Board have been expanded. The AI Board should play a more significant role in the uniform application of the Regulation and in providing advice and recommendations to the Commission, for example on the need to amend Annex III, and to national supervisory authorities. The Board should act as a forum for exchange among national supervisory authorities and, at the same time, it should constitute a place for the arbitration of disputes involving two or more Member States’ authorities, in order to avoid the fragmentation of the Single Market through differentiated enforcement. Furthermore, given its increased role and responsibilities, the Board should organise, at least twice a year, consultations with industry, start-ups and SMEs, civil society, and academia, in order to carry out its tasks in collaboration with all relevant stakeholders.

At the national level, the co-rapporteurs have stressed the need for close cooperation between the market surveillance authorities and the data protection authorities, as the enforcement of the Regulation on AI will require both sets of competences, which, moreover, should be regularly updated. In cases of infringements of fundamental rights, the relevant fundamental rights bodies should also be closely involved.

In order to tackle possible issues affecting individuals in several Member States, the co-rapporteurs propose a new enforcement mechanism by the Commission, to be triggered in cases amounting to widespread infringements (three or more Member States), including in the case of inaction on an infringement affecting at least three Member States. This mechanism, based on the model of the Digital Services Act but adapted to the different nature of the AI legislation, aims to address some of the enforcement problems observed in other governance setups, to contribute to the uniform implementation of this Regulation, and to strengthen the digital single market. Under this mechanism, in such cases of widespread infringements, the Commission should have the powers of a market surveillance authority, on the model of the Market Surveillance and Compliance Regulation.

The co-rapporteurs believe it is important to strengthen the involvement of stakeholders and civil society organisations in several key provisions of the Regulation, such as the updates to the list of high-risk AI systems, the standardisation process, and the activities of the Board and the sandboxes. Furthermore, in order to ensure that individuals are properly empowered when the use of an AI system infringes their rights, but also in order to contribute to building trust in AI systems and their widespread use, the co-rapporteurs have added a dedicated chapter on remedies for both natural and legal persons.

The co-rapporteurs jointly emphasise that the goal of the AI Act is to ensure both the protection of health, safety, fundamental rights, and Union values and, at the same time, the uptake of AI throughout the Union, a more integrated digital single market, and a legislative environment suited to entrepreneurship and innovation. This spirit has guided and will continue to guide their work on this Regulation.
Verlag Dr. Otto Schmidt, 23.04.2022 17:17
