Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV, the latest amendments by European Parliament rapporteurs Dragoş Tudorache and Brando Benifei focus on the metaverse, risk management, data governance and documentation for high-risk systems. It is reported that the risk management system would have to consider health, fundamental rights, impact on specific groups, the environment and disinformation. In addition, the technical documentation requirements are extended to cover the user interface, how the AI system works, expected inputs and outputs, cybersecurity measures, and carbon footprint. Some topics, such as principles applicable to all AI systems and whether there will be an AI agency or a stronger AI board, have been put on hold. Similarly, questions about the foreseeable uses and misuses of AI systems will be addressed together with general-purpose AI at a later date.
The European Commission has proposed a targeted harmonisation of national liability rules for AI, so that victims of AI-related damage can receive compensation. The AI Liability Directive aims to complement the AI Act by facilitating civil liability claims for damages. More concretely, it lays down rules on access to information and on alleviating the burden of proof in relation to damages caused by AI systems. It pursues these objectives by introducing a right of access to evidence from companies and suppliers in cases involving high-risk AI, and by establishing a presumption of causality where a relevant fault of an AI system has been demonstrated and a causal link to the AI system's performance seems reasonably likely.
The Czech presidency has shared the latest set of compromise amendments from the Council. One of the main proposals is for the European Commission to adapt the obligations of the AI Act to general-purpose AI by adopting an implementing act, taking into account an impact assessment, technical feasibility and technological developments. Other areas covered in the text include notifying authorities and notified bodies; standards, conformity assessment, certificates and registration; transparency obligations; and measures in support of innovation.
Analyses
MIT Technology Review published an article covering the AI Liability Directive, which would add teeth to the AI Act by giving people and companies the right to sue for harm caused by AI. As an example, job seekers could get a court to force an AI company to share information about its AI system in order to identify the causes of harm. Some tech lobbying groups, such as CCIA, worry that the law could negatively affect software development, because developers would risk becoming liable not only for software bugs but also for the impact of their software on users' mental health. Other tech lobby groups, such as BSA, welcome the fact that there will be a common way to seek compensation across the EU when an AI system causes harm. The main EU consumer group, BEUC, criticises the proposal for putting the responsibility on consumers to prove that an AI system harmed them or that an AI developer was negligent. Finally, some civil society groups, such as the Future of Life Institute (where I work), point out that the directive does not take into account indirect harms from AI, such as a social platform inadvertently boosting polarising content.
The Centre for European Policy Studies published a report about the AI value chain in the AI Act, including its relation to general-purpose AI (GPAI) models. The authors provide six policy recommendations: 1) discourage API access for GPAI use in high-risk AI systems; 2) envisage soft commitments for GPAI model providers; 3) discourage value chain types in which a vendor builds software for a specific intended purpose but does not itself provide the data or pre-trained AI models; 4) exempt the placing of an AI system online as free and open-source software; 5) clarify ambiguities concerning the identity and obligations of the providers of high-risk AI systems; and 6) incorporate transparency responsibilities and human oversight requirements for users of high-risk AI systems.
Tech Monitor summarised a statement by a group of industry associations led by BSA, whose members include Microsoft and IBM, calling for general-purpose AI (GPAI) to be exempted from regulation under the AI Act. The article claims that including GPAI in the AI Act could stifle innovation and hit the open source community. The statement argues that regulating GPAI would depart from the risk-based approach set out by the Commission and lead to an unbalanced allocation of responsibilities along the AI value chain. Some industry representatives, however, advocate for the inclusion of GPAI in the legislation due to the unintended harms it could cause. In addition, it is argued that these requirements would not be a significant burden on developers, who are already accustomed to providing legal documentation for open source software.
The Parliament Magazine published an overview of how the AI Act tries to protect the rights of Europeans. According to the article, most MEPs agree on better protection against surveillance and better protection of privacy, but some political groups advocate for remote biometric identification systems and emotion recognition systems to be banned. The co-rapporteurs in the European Parliament have introduced a requirement for an impact assessment of the risks AI systems pose to fundamental rights and of how those risks can be mitigated. In addition, the article mentions that the list of high-risk applications has been extended to include AI systems that interact with children or are able to influence democratic processes. Finally, some political groups have proposed banning systems that predict future criminal activity.