Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The French Presidency of the Council of the EU shared a new compromise text on the AI Act, focused on Articles 30-39 and 59-62. The latest amendments concern the actors involved in the conformity assessment of AI systems: notifying authorities, notified bodies and the European Commission. For the database of high-risk AI systems, wider public accessibility has been proposed; however, malfunctioning is now excluded from reporting obligations. The Presidency has also provided amendments on Articles 70-85, but to our knowledge that text is not yet publicly available. EURACTIV reported that the deadline to apply the regulation has been extended from two to three years, and that the Commission will have to seek approval from Member States for its continued use of delegated acts after five years. Fines for non-compliance with the regulation have been proposed that take the size of businesses into account.
MEPs in the Committee on Transport and Tourism (TRAN) proposed their amendments to the AI Act proposal. Their suggestions include narrowing the AI definition to leave “normal programming” out of scope, removing the prohibition on the use of real-time remote biometric identification systems in public spaces for law enforcement, and adding AI systems with a large number of customers to the list of high-risk systems. Furthermore, they recommend much stronger protection of the environment throughout the text and the addition of a new article on general purpose AI systems that allocates responsibility to those who use or put such a system into service for a high-risk intended purpose. The draft opinion by MEP Josianne Cutajar was published in late March and we briefly summarised it in newsletter #5.
Analyses
The Future of Life Institute (us!) published a policy paper on general purpose AI and the AI Act. The paper states that general purpose AI systems have a wide range of possible uses, both intended and unintended by the developers. It argues that under the current AI Act draft, the burden of making these systems compliant with the regulation would fall entirely on the users of the AI systems instead of the developers. This could limit the uptake of general purpose AI systems and cause AI innovation to concentrate further with the developers. The paper makes multiple recommendations, including a definition of a general purpose AI system and obligations for providers of such systems.
The Irish Council for Civil Liberties commented on the recent report by the two leading European Parliament committees. In particular, they praise a proposal to change the definition of AI so that it allows for the inclusion of objectives set by the AI systems themselves. They also welcome the addition of the right to lodge a complaint and the right to judicial remedy, and agree with the decision to require providers to give examples of scenarios in which the AI system should not be used. Finally, they encourage further action along the following lines: including general-purpose AI in the scope of the AI definition, assigning more powers to national Market Surveillance Authorities, and strengthening the EU database by requiring users to register and by including near-misses in reporting obligations.
European Digital Rights collected a long list of amendments put forward by various members of civil society. The list includes amendments on future-proofing the AI Act as well as on prohibiting biometric recognition, predictive policing, AI used in migration, and emotion recognition. Furthermore, there are suggestions to introduce obligations on users of high-risk AI, ensure public transparency about the use of AI, and provide the right to seek information when affected by AI-assisted decisions, among many other recommendations.
The National Law Review analysed the AI Act, focusing on the implications of the EU regulation for the UK, specifically from the perspective of human resources (HR). The case is made that the Act will have a significant impact on UK businesses, as any use of AI in HR could be considered high-risk. They list concrete examples of HR practices that could be covered, such as CV scanners, reasoning tests, work allocation software and performance monitoring. The review concludes by recommending that UK businesses proactively seek information on the legislative process and prepare for significant changes to their business.
Swedish MEP Jörgen Warborn (EPP/Moderate Party) stated in an op-ed that he will tackle what he considers key issues with the current AI Act proposal. The main problem, in his view, is an unnecessary burden on EU businesses that overlaps with other regulations and hampers innovation. Granting that the regulatory process cannot be reverted, he deems the risk-based approach fit-for-purpose. However, he calls for a clear distinction between AI that is high-risk and AI that is not. Warborn says he will work to make the regulation "as unbureaucratic as possible" and will push for an implementation that ensures predictability across the whole EU.
Pinsent Masons, a law firm, published an analysis of the human oversight obligations in the AI Act draft. They note that the recent IMCO-LIBE report would require businesses using high-risk AI systems to take responsibility for ensuring that the people overseeing those systems are trained and have the resources needed for supervision. The analysis also highlights that the report would require providers of high-risk AI systems to make sure that the people in charge of these systems are aware of the possible dangers of letting machines make decisions on their own.