Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV's Luca Bertuzzi, the European Parliament reached a provisional political deal on the AI Act on 27 April. The text may still be subject to minor adjustments at the technical level ahead of a key committee vote scheduled for 11 May, but it is expected to go to a plenary vote in mid-June. One key change in the prohibited practices section is that the Act now bans "purposeful" manipulation, despite concerns that intentionality might be difficult to prove. AI used to manage critical infrastructure such as energy grids or water management systems is categorised as high-risk if these applications entail severe environmental risks. The recommender systems of "very large online platforms", as defined under the Digital Services Act, are also deemed high-risk. Extra safeguards were included for the process whereby providers of high-risk AI systems can process sensitive data, such as sexual orientation or religious beliefs, in order to detect negative biases. Finally, high-risk AI systems will have to keep records of their environmental footprint, and foundation models will have to comply with European environmental standards.
Martin Coulter and Supantha Mukherjee from Reuters reported that MEPs have updated their draft rules to cover generative AI, which earlier versions had omitted. The new proposals would require companies with generative AI systems, such as OpenAI, to disclose any copyrighted material used to train their models. The MEPs agreed on the need for laws targeting the use of generative AI after an explosion of interest in the technology provoked both awe and anxiety. In April, a dozen MEPs involved in drafting the legislation signed an open letter agreeing with some parts of the Future of Life Institute's letter and urged world leaders to hold a summit to find ways to control the development of advanced AI.
Bradford Betz from Fox News summarised a recent letter in which a group of 12 MEPs called for a summit to address the rapid development of advanced AI systems, which they say are evolving faster than anticipated. The letter was written in response to the Future of Life Institute's recent open letter, in which leading technology figures called for a six-month pause on AI systems more powerful than GPT-4. The EU lawmakers agreed with its core message: the rapid evolution of powerful AI has created a need for significant political action. The MEPs urged both democratic and non-democratic countries to exercise restraint in the pursuit of powerful AI. A key committee vote on the AI Act was postponed following discussions about the impact of ChatGPT, and according to Dragoș Tudorache, the Act is now unlikely to be enacted until next year. The letter itself can be read here.
Analyses
Mehwish Ansari, Head of Digital at ARTICLE 19, and Vidushi Marda, Senior Programme Officer, wrote an op-ed in EUobserver noting that because the proposed AI Act remains vague on how to implement the essential requirements for high-risk AI systems, responsibility for determining the specifics of these requirements will be delegated to the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC). Ansari and Marda argue, however, that these bodies lack representation from human rights experts and civil society organisations, raising concerns about their ability to protect fundamental rights. They state that technical standards related to data governance, transparency, security, and human oversight will have a direct impact on people's privacy and other fundamental rights, and that the lack of democratic scrutiny or legislative interpretation of these standards may weaken the implementation of the AI Act. The authors suggest that a better approach would be to establish a fundamental rights impact assessment framework and to require all high-risk AI systems to undergo such an evaluation as a condition of being placed on the market.
Alex Engler, Fellow in Governance Studies at The Brookings Institution, wrote a report making the case that the EU and the US are jointly pivotal to the future of global AI governance and need to align their approaches to AI risk management in order to facilitate bilateral trade, improve regulatory oversight, and enable broader transatlantic cooperation. Engler states that the US approach to AI risk management is distributed across federal agencies, while the EU approach is characterised by more comprehensive legislation tailored to specific digital environments. While both approaches share a conceptual alignment on a risk-based approach, trustworthy AI principles, and international standards, they differ in specifics, particularly in relation to socioeconomic processes and online platforms. Engler makes several policy recommendations: 1) the US should execute federal agency AI regulatory plans with an eye toward EU-US cooperation; 2) the EU should create more flexibility in the sectoral implementation of the EU AI Act; 3) the US should implement a legal framework for online platform governance; and 4) both the EU and the US should deepen knowledge sharing and develop an AI assurance ecosystem.
Norberto de Andrade, Director of AI Policy at Meta, Laura Galindo, AI Policy Manager at Meta, and Antonella Zarra, AI Policy Program Manager at Meta, published two new reports on the AI Act as part of their Open Loop policy prototyping experiment. The first report explores the taxonomy of AI actors in the AI Act and proposes an alternative. It concludes that the existing taxonomy does not accurately reflect the actors in the AI ecosystem; in particular, the roles of subjects, third-party service providers, and data providers appear to be missing. The proposed alternative includes the AI developer, AI provider, AI service provider, data provider, user, end user, subject, importer, and distributor. The second report explores the AI regulatory sandbox provision described in Article 53 of the AI Act. It concludes that the article does not provide the conditions necessary for successful regulatory sandboxes, but that the implementing acts described in Article 53(6) could create them. To make AI regulatory sandboxes attractive to providers, incentives such as direct interaction with the regulator, access to knowledge and resources, and a collaborative learning environment are needed. However, member states should avoid creating conditions that are too favourable for participants, as this could distort the level playing field for AI development in Europe.