Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Brando Benifei and Dragoș Tudorache, the Members of the European Parliament leading on the AI Act in the IMCO and LIBE committees, published their draft report. According to EURACTIV, noteworthy aspects of the report include proposals to give the European Commission additional powers and to add predictive policing to the list of prohibited practices. No amendment on general-purpose AI systems has been included, which suggests that the question of how to deal with them has been deferred to the amendment stage. Furthermore, the report recommends that no exemptions for high-risk systems be introduced on public interest grounds, that a new paragraph on protecting intellectual property be added, and that datasets be kept up to date to the extent possible. The deadline for amendments to this report is 18 May.
MEPs in the Committee on Culture and Education (CULT) proposed their own amendments to the AI Act as part of the committee's legislative process, following MEP Marcel Kolaja's draft opinion in this committee. Their suggestions include replacing 'subliminal techniques' in Article 5 with 'psychological techniques', bringing economic harm within the scope of Article 5, and considering risks to the environment, democracy and the rule of law in the assessment of high-risk systems. Furthermore, they emphasise that the ban on the use of real-time remote biometric identification systems should apply across the board, not only in connection with law enforcement purposes. The draft opinion by MEP Kolaja was published in February, and we briefly summarised it in newsletter #2.
Analyses
Mozilla wrote a response to the IMCO and LIBE draft report on the AI Act. They state that the report would further strengthen the AI Act, but they still have concerns. They argue that the AI Act can be improved in three ways: 1) ensuring accountability for high-risk uses of AI along the supply chain, 2) creating systemic transparency as a mechanism for enhanced oversight, and 3) giving individuals and communities a stronger voice and means of contestation. More concretely, they recommend dividing compliance obligations for multi-purpose AI systems between developers and deployers in a way that protects people from harm; neither should bear sole responsibility. Furthermore, they suggest expanding the scope of the public database for high-risk systems and clarifying that organisations advocating on someone's behalf should also be able to file complaints.
Computer Weekly reported that the joint report by MEPs Tudorache and Benifei sets out a limited ban on predictive policing systems, alongside other amendments to improve redress mechanisms and extend the list of AI systems deemed high-risk. The rapporteurs argue that predictive policing violates human dignity and the presumption of innocence, and that it carries a particular risk of discrimination. The ban would, however, extend only to systems that “predict the probability of a natural person to offend or reoffend”. Civil society groups welcome this amendment but wish to see it go further, since it does not currently apply to predictive policing systems that profile neighbourhoods for the risk of crime.
Marcel Kolaja (CZ, Greens/EFA), rapporteur for the CULT committee's opinion on the AI Act, published an op-ed in The Parliament Magazine. He argues that one of the most problematic aspects of the Act is the permitted use of remote biometric identification systems in public spaces. He also advocates a total ban on emotion recognition systems and recommends that technologies used for students' personalised education be included in the high-risk category. In addition to education, the committee focuses on the media sector, where AI systems can be misused to spread disinformation, threatening the functioning of democracy and society.
Eva Maydell (BG, EPP), rapporteur on the AI Act for the Committee on Industry, Research and Energy (ITRE), also wrote an op-ed in The Parliament Magazine. She claims that the regulation, along with the changes proposed by ITRE, will promote the adoption of AI while ensuring its safety. She argues that prohibiting technology seldom works as anticipated, and that companies instead require clearer guidelines, simpler tools and more efficient resources to cope with regulation and to innovate. Her priorities are to strengthen measures supporting innovation, provide a more concise definition of an AI system, set high but realistic standards for cybersecurity and data, and future-proof the AI Act.
ALLAI, an organisation focused on responsible AI, published its third in-depth analysis of the AI Act, this time focusing on high-risk AI classification, which covers Articles 6 and 7 as well as Annexes II and III. One of the main points they highlight is that classifying AI as high-risk based on a limited set of criteria sits in tension with fundamental rights doctrine. They also emphasise that because new high-risk AI systems can only be added within pre-determined domains, the Act is not particularly future-proof in its current form. They contend that the reasonably foreseeable use of an AI system should be taken into account in addition to its intended purpose. We briefly summarised ALLAI's previous two reports in newsletter #4.