The EU AI Act Newsletter #66: Huge Edition
Independent experts have presented the initial draft of the General-Purpose AI Code of Practice, and the consultation on AI Act prohibitions and the definition of an AI system is now open.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
First draft of the Code of Practice available now: Independent experts have presented the initial draft of the General-Purpose AI Code of Practice, completing the first of four drafting rounds scheduled before April 2025. The draft was discussed with approximately a thousand stakeholders, including EU Member State representatives and international observers, in dedicated working group meetings. Facilitated by the European AI Office, the document was developed by the appointed Chairs and Vice-Chairs of four thematic working groups, incorporating input from general-purpose AI model providers and considering international approaches. The draft outlines guiding principles and objectives, with open questions highlighting areas for further development. The final Code will establish rules on transparency and copyright compliance for providers of general-purpose AI models. For providers of the most advanced models that could pose systemic risks, it will also detail a taxonomy of systemic risks, risk assessment measures, and technical and governance mitigation measures. The drafting principles emphasise measures proportionate to provider size, with simplified compliance options for SMEs and exemptions for open-source models.
Consultation on AI Act prohibitions and AI system definition: The European AI Office has initiated a targeted consultation on forthcoming guidelines covering the definition of an AI system and the implementation of practices deemed unacceptable under the AI Act. These guidelines aim to help national competent authorities, providers and deployers comply with the Act before the relevant provisions take effect on 2 February 2025. The AI Office is seeking input from a broad range of stakeholders, including AI system providers, businesses, national authorities, academia, research institutions and civil society organisations. While the Act already establishes the legal concepts of the AI system definition and prohibited practices, this consultation seeks practical examples from stakeholders to enhance the guidelines' clarity through real-world use cases. The feedback received will contribute to the Commission's final guidelines, scheduled for publication in early 2025. The consultation period will run for four weeks, concluding on 11 December 2024.
AI Office participates in inaugural International Network of AI Safety Institutes meeting: The EU AI Office participated in the inaugural meeting of the International Network of AI Safety Institutes in San Francisco, which was structured around three tracks. Track 1 addressed risks from AI-generated synthetic content, focusing on digital transparency techniques and safeguards to prevent harmful outputs; participants discussed best practices for transparency and risk mitigation, acknowledging the need for complementary approaches, including educational and regulatory measures. Track 2 centred on evaluating and testing foundation models, aiming to develop a shared understanding of evaluation methods, and AI Safety Institutes presented a prototype joint testing exercise as groundwork for future development. In Track 3, participants endorsed a Joint Statement on Risk Assessment of Advanced AI Systems, coordinated by the EU AI Office and the UK; this technical document establishes a shared basis for developing comprehensive and effective risk assessment strategies. The outcomes of the meeting will inform discussions at the Paris AI Action Summit in February 2025.
General-purpose AI questions and answers: The AI Office has published a Q&A document to help interpret specific provisions of the AI Act, while noting that only EU Courts have official interpretative authority. The FAQ addresses fundamental questions about general-purpose AI models, including their definition, systemic risks and provider obligations. It clarifies requirements for open-source models, research and development activities, and model modifications through fine-tuning. Significant attention is given to the General-Purpose AI Code of Practice, explaining its scope, limitations and relationship with AI systems. The document addresses how the Code accommodates startups' needs and outlines its finalisation timeline, legal implications and review processes. The FAQ also covers the AI Office's enforcement powers.
Analyses
The Code offers a unique opportunity for the EU: Yoshua Bengio, Turing Award-winning computer scientist, and Nuria Oliver, Director of the ELLIS Alicante Foundation, two of the thirteen Chairs and Vice-Chairs overseeing the EU's Code of Practice for general-purpose AI (GPAI), argued in Euractiv that the Code represents a unique opportunity to implement the key rules of the AI Act successfully. The Code aims to translate the AI Act's principles into actionable measures and metrics, providing legal clarity across the entire EU. This harmonised approach contrasts with the fragmented state-level regulations in the US, benefitting companies operating in the European market. Bengio and Oliver highlight the guiding principles they are following in the drafting: alignment with EU rights and values, alignment with the AI Act and international approaches, proportionality to risks and provider size, future-proofing, and support for the AI safety ecosystem. The drafting process is deliberately iterative, with hundreds of stakeholders providing feedback over several months. The authors' goal is to produce a Code that facilitates GPAI innovation while protecting fundamental rights, democracy and the rule of law.
Commentary on the Code of Practice: Miles Brundage, former Head of Policy Research and Senior Advisor for AGI Readiness at OpenAI, and Dean Ball, Research Fellow at the Mercatus Center, published their comments on the first Code of Practice draft. The authors broadly support aspects of the Code while noting that many questions remain unanswered in this early draft. They commend the transparency measures outlined, which specify information sharing with authorities and downstream providers, though they suggest focusing on transparency with clear benefits rather than transparency for its own sake. They argue that the current definition of "systemic risks" is too broad and lacks measurement criteria: while some risks, like cyber-offence capabilities, are straightforward to evaluate, others, such as societal impact, present significant assessment challenges. Regarding systemic risk documentation, they suggest improving efficiency by incorporating updates into Safety and Security Frameworks (SSFs) rather than creating additional Safety and Security Reports (SSRs). On whistleblower protections, they recommend specific expansions, including coverage for all full-time employees of frontier AI developers, protection from employer retaliation, clear reporting processes to relevant authorities, and robust information security measures for whistleblowing reports.
Founders and investors think that AI and privacy laws stand in the way of growth: POLITICO journalists Pieter Haeck and Giovanna Coi reported on a survey of Europe's tech sector revealing widespread concern about EU regulations hampering business growth. The study by VC firm Atomico, which gathered approximately 3,500 responses, shows significant criticism of both the General Data Protection Regulation (GDPR) and the AI Act: 60% of respondents reported that GDPR had negatively impacted the startup environment, with only 15% citing positive effects, while 53% viewed the AI Act negatively and just 20% saw it positively. The findings align with former Italian Prime Minister Mario Draghi's recent economic report, which identified Europe's complex regulatory framework as a barrier to competing with the US and China. The concern is underscored by a substantial funding gap: European AI companies raised $11 billion compared to $47 billion by their US counterparts. The European Commission appears to be responding, with President von der Leyen highlighting "AI innovation" as a future priority.
Recommendations for the Code of Practice: ALLAI published its recommendations for a Code of Practice for Trustworthy General-Purpose AI (GPAI), built on three main pillars: lawfulness, ethical alignment and robustness. The Code should align with the AI Act while supporting innovation and clarifying the distinction between specific-purpose and general-purpose AI models. Regarding systemic risks, the recommendations suggest expanding the criteria to include model generality and adaptability. Assessment should consider both demonstrated and emergent capabilities, and the framework should be flexible enough to adapt to nascent AI paradigms. The authors state that workplace displacement, model concentration, environmental impact and providers becoming 'too big to fail' should also be considered. The recommendations further identify new sources of systemic risk, such as human error, deception in oversight, unexpected capability jumps and model misalignment. Proposed risk assessment measures include flexible thresholds, cumulative risk assessment, and continuous evaluation of model trustworthiness and generality levels.
Jobs
The AI Office is hiring a Lead Scientific Advisor for AI: The European AI Office is hiring a Lead Scientific Advisor for AI. The application deadline is 13 December 2024. Based on the European Union Employment Advisor, the monthly basic salary for this role (grade AD13) is approximately €13,500-15,000. You can apply here. The Lead Scientific Advisor for AI should ensure an advanced level of scientific understanding of General-Purpose AI. They will lead the AI Office's scientific approach to General-Purpose AI, ensuring the scientific rigour and integrity of AI initiatives, with a particular focus on the testing and evaluation of General-Purpose AI models, in close collaboration with the AI Office's 'Safety Unit'. Eligibility requirements: citizenship of an EU Member State; a university degree or diploma; at least fifteen years of professional experience; thorough knowledge of one EU language and satisfactory knowledge of another; and not having reached regular retirement age.
I think it is worth highlighting that providers have found some satisfaction in the harmonisation of requirements across Member States. Continuing the conversation around the development of codes of practice, and aligning on ways to satisfy obligations, is more productive than continuing to argue over the disadvantages of regulation in the first place.
I look forward to learning who the Lead Scientific Advisor for AI will eventually be.