Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
This week, according to Euractiv's Luca Bertuzzi, the Spanish presidency shared a document in preparation for negotiations with the European Parliament and Commission on the AI Act, to be held on 24 October. A significant focus of the document is the regulation of AI models without a specific purpose – foundation models such as the model behind ChatGPT. The Parliament had subjected such models to a tiered approach, a concept now embraced by the Council. In the document, the Spanish presidency introduces a possible new definition for foundation models and proposes a new category of 'very capable foundation models'. It also outlines a third category for general-purpose AI systems used at scale. One key bone of contention is the use of real-time biometric identification systems in law enforcement: MEPs want a ban, while EU governments prefer exceptions. The proposed compromise narrows the exceptions, limiting their use to specific cases such as preventing terrorist attacks and human trafficking. Differences also remain on emotion recognition, where the Parliament leans towards a broad ban while the Council suggests specific exemptions, particularly for group screenings and safety reasons.
Euractiv reported that the Spanish presidency has also shared three discussion papers to collect EU member states' input on key aspects of the Act, addressing fundamental rights, workplace decision-making, and sustainability obligations. On fundamental rights, MEPs propose obliging users of high-risk AI systems to conduct a fundamental rights impact assessment. The presidency offers three options: one close to the Parliament's version; one that only requires users to submit information to market surveillance authorities via a template; and one that folds the assessment into the other obligations for high-risk AI systems rather than treating it as a stand-alone article. On workplace AI use, MEPs want national measures to protect workers' rights and to require consultation with worker representatives before AI is deployed in the workplace; the presidency suggests a compromise of merely informing worker representatives. Both the Parliament and the Council introduced sustainability provisions, but the Parliament also included environmental harms in the assessment of high-risk AI. The presidency aims to separate sustainability from the high-risk requirements and make it part of the voluntary technical standards for AI providers.
Analyses
DIGITALEUROPE published a position paper for the AI Act trilogues. The paper compares the European Parliament's and the Council's negotiating mandates and offers recommendations for improving the legislation. Among other things, it recommends: 1) aligning the definition of 'AI' with international frameworks such as the OECD's; 2) including exemptions for research and development and for open source to promote innovation; 3) a technology-neutral, risk-based approach focused on high-risk use cases; 4) clear and precise definitions of prohibited practices to avoid unintended restrictions; 5) combining the Parliament's 'significant risk' criterion with the Council's human oversight condition; 6) aligning the AI Act with existing European legislation to avoid disruptions in well-established sectors.
A group of European business leaders, startup founders, and investors made a public statement calling for the regulation of foundation models under the AI Act. The group shares the concerns raised by citizens and prominent AI scientists about the potentially catastrophic consequences of foundation model development, sentiments echoed by influential figures such as the European Commission President, the UN Secretary General, the US President, and the UK Prime Minister. The business representatives acknowledge their dependence on foundation models, but believe that regulatory efforts are necessary given the high stakes involved. They argue that the Act should place proportionate requirements on foundation model developers, bearing in mind that these rules will apply in approximately two years' time and should accommodate the evolving nature of AI models. The statement emphasises the need for effective and meaningful regulation to ensure AI safety and trustworthiness, with the goal of leading global practices.
Akshaya Asokan reported for BankInfoSecurity that the EU is moving towards establishing an EU AI Office to oversee the implementation of the AI Act, particularly with regard to big-tech companies like OpenAI. Dragoş Tudorache, co-rapporteur of the AI Act, revealed that negotiators have agreed, at least in principle, to create the office as a centralised agency with national subsidiaries. It would be responsible for hiring talent, building expertise, ensuring coordination among member states, and monitoring the activities of big-tech companies with a global presence. Drawing on lessons from the implementation of the General Data Protection Regulation, the EU recognises the need for a powerful enforcer to handle influential global tech companies effectively. While a political agreement on the creation of this entity has been reached, its specific functions, financing, and governance at the European level remain undetermined. Concerns have been raised about the two-year enforcement gap in the AI Act, with some suggesting a more realistic enforcement timeline of 12 to 16 months. The EU is expected to reach a political agreement on the AI Act later this month, with passage into law likely early next year.
Samuel Curtis, Felicity Reddel, and Nicolas Moës of The Future Society published a blueprint for the European AI Office, a centralised institution to be created under the AI Act to oversee and support the implementation and enforcement of the regulation across the EU, notably on transnational issues such as general-purpose AI. The report explores the mechanisms essential for the effective, efficient, coherent, and legitimate functioning of the AI Office, drawing on desk research and insights from expert interviews. It provides a set of recommendations spanning legal, structural, financial, functional, and behavioural aspects, designed to guide the development and operations of the AI Office. Visual and summary aids provide an overview of the recommendations across these five categories.