Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV, the European Parliament’s co-rapporteurs, Brando Benifei and Dragoș Tudorache, have circulated new compromise amendments to the AI Act on classifying high-risk AI systems. These systems are classified as high-risk if they are, or are part of, the safety component of products covered by EU harmonisation legislation, or if they fall within the categories and use cases listed in Annex III. However, the MEPs added a requirement that the high-risk list only cover systems with an intended purpose; general-purpose AI will be treated separately pending further discussions. According to EURACTIV’s summary, a system used in the categories listed in Annex III will only be considered high-risk if it receives personal or biometric data as inputs or is intended to make or assist decisions affecting individuals’ health, safety, or fundamental rights. Previously, the European Commission could only add use cases under existing high-risk areas, not change or remove them; the conditions for assessing new risks have now been significantly revised. During this assessment, the Commission should consult the AI Office, which would have a central role in the governance architecture. The procedure for these cases would be detailed in a delegated act.
Analyses
appliedAI published the findings of a survey of 113 EU-based AI startups and 15 venture capital firms (VCs), exploring the impact of the AI Act on European startups. The main findings are as follows: 73% of the surveyed VCs expect the AI Act to reduce the competitiveness of European AI startups; 33% of the startups believe their AI systems would be classified as high-risk, compared to the 5-15% estimated by the European Commission; 45% of startups believe their AI systems could be classified as general-purpose AI; 50% believe the AI Act will slow down AI innovation in Europe; and 16% are considering stopping AI development or relocating outside the EU. The data governance, risk management, and accuracy, robustness and cybersecurity requirements are seen as the hardest to comply with, with over 50% of respondents rating them somewhat or very difficult. Technical documentation, human oversight, and record keeping are seen as less difficult.
The Responsible Artificial Intelligence Institute published an explainer of the AI Act, including its importance, definitions, requirements, main points of contention, and how it fits into the broader AI regulatory context. The explainer emphasises that the AI Act is important for businesses because it affects their compliance obligations and protects their bottom line; it is important for the public because it sets the global bar for AI regulation and holds businesses accountable for their use of AI; it is important for regulators because it will set a precedent for future AI regulatory approaches; and it is important for responsible AI because it establishes legal requirements and enforcement mechanisms for AI systems.
The Artificialintelligenceact.eu website (maintained by us, the Future of Life Institute) published two summaries: 1) the AI Act's institutional context, and 2) the AI Act's standard setting process. The first summary highlights the key actors and the steps of the legislative procedure, and gives important context on the well-established tools the AI Act employs for enforcement and maintenance after publication. For example, it explains the relationships between conformity assessments, notified bodies, market surveillance authorities, and harmonised standards. The second piece summarises in more detail how harmonised standards fit into the AI Act, who the key actors in their development are, what the process looks like, and what the expected timeline is. It is worth emphasising that harmonised standards play an important role in EU legislation by translating high-level legal requirements into concrete technical specifications.
Open Future wrote a blog post asking how the AI Act will handle open source AI systems. Open source advocates and representatives of companies contributing to the development of open source AI models are pushing for provisions that would remove open source AI systems from the Act's scope and/or exclude general-purpose AI (GPAI) systems from the Act. However, a coalition of civil society organisations has argued that for the AI Act to provide meaningful protections against high-risk uses of AI systems, GPAI systems must remain in scope and the obligations imposed on them must remain meaningful. The author recommends transparency requirements for these systems, such as standardised information about the model in the form of model cards and data sheets, access to the data used to train and fine-tune the model for auditing purposes, and licensing conditions that allow the model's behaviour to be audited and explained. The author argues that in this way, developers can create the conditions for downstream users to comply with the AI Act's obligations for high-risk uses of the model.
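To make the model-card recommendation concrete: a model card is, in essence, structured documentation attached to a model. Below is a minimal, hypothetical sketch in Python of what such standardised information might look like in machine-readable form; the field names are our illustrative assumptions drawn from the general model-card literature, not a format prescribed by the AI Act or by Open Future.

```python
from dataclasses import dataclass

# Hypothetical, minimal model card structure. Field names are illustrative
# assumptions, not a format prescribed by the AI Act or by Open Future.
@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_uses: list[str]              # uses the developer designed for
    out_of_scope_uses: list[str]          # uses the developer advises against
    training_data_summary: str            # provenance of training/fine-tuning data
    evaluation_metrics: dict[str, float]  # e.g. accuracy or robustness scores
    known_limitations: list[str]
    licence: str                          # licensing terms, e.g. whether audits are permitted

# Example instantiation for a fictional general-purpose model.
card = ModelCard(
    model_name="example-gpai-model",
    version="1.0",
    intended_uses=["text summarisation", "drafting assistance"],
    out_of_scope_uses=["credit scoring", "biometric identification"],
    training_data_summary="Public web text; see accompanying data sheet.",
    evaluation_metrics={"summarisation_rouge_l": 0.41},
    known_limitations=["English only", "may reproduce biases in training data"],
    licence="Apache-2.0 (hypothetical choice)",
)
```

A data sheet would play the analogous role for the training data itself; publishing both alongside the model is what would allow downstream deployers to assess whether a given high-risk use can meet the Act's requirements.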
Holistic AI summarised the final compromise text of the Council of the EU for the AI Act. The final text adopts a narrower definition of AI, as member states were concerned that a broader definition would also capture conventional software. Three key changes were made to the scope of the Act, relating to inclusion, exclusion and expansion. In the area of governance, four key changes were made, relating to the AI Board, regulatory sandboxes and penalties. One key change was made to the prohibitions section: the prohibition of social scoring was extended to private actors. For designating high-risk systems, three key changes were made: vaguer parameters for classifying high-risk systems; the ability of the Commission, under certain conditions, to add and remove high-risk use cases to and from Annex III once the Act is implemented; and increased transparency requirements for high-risk systems, including for public-body users. The final text also clarifies and adjusts the requirements for high-risk systems to make compliance more feasible.