The EU AI Act Newsletter #61: Council of Europe AI Convention vs the Act
Comparison of the Council of Europe's AI convention with the AI Act. The AI Office receives nearly 1,000 expressions of interest to participate in the code of practice.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Strong interest in the first general-purpose AI code of practice: The AI Office has received nearly 1,000 expressions of interest from organisations and individuals worldwide to participate in drafting the first General-Purpose AI Code of Practice. The drafting process will commence with an online kick-off plenary on 30 September. The AI Office is currently verifying applicant eligibility based on submitted and public information, with participation confirmations to follow in due course. The Code will detail AI Act rules for providers of general-purpose AI models, including those posing systemic risks. A multi-stakeholder consultation, with submissions due by 18 September, will inform the drafting process.
Controversy over the hiring of a lead scientific adviser for AI: Euractiv's Tech Editor Eliza Gritsi reported that the European Commission's search for an internal lead scientific adviser for AI has sparked controversy. This role, under the Directorate-General for Communications Networks, Content and Technology (CNECT), is separate from the AI Office but will assist in monitoring technological developments, particularly regarding powerful general-purpose AI models. The position, announced in May, involves interfacing with a scientific panel and advising on innovation policy. Recently, the internal hiring process has faced criticism from Kai Zenner and Svenja Hahn from the European Parliament, who argue it contradicts the spirit of previous agreements and the intended structure of the AI Office. A Commission official clarified that the role is open to all EU institution officials, including agencies like ENISA, and may later be opened to external candidates if unfilled internally. The official emphasised that no political agreement existed on the hiring process for this position, which was created after AI Act negotiations concluded.
The first official AI Board meeting: The European Commission hosted the first official meeting of the AI Board following the AI Act's entry into force on 1 August. The Board, comprising high-level representatives from the Commission and EU Member States, discussed how to enhance AI development in the EU and how to implement the AI Act. The European Data Protection Supervisor and EEA/EFTA representatives from Norway, Liechtenstein and Iceland attended as observers, with the AI Office providing secretariat services. Key focus areas included establishing the Board's organisation and rules, progress updates on EU AI policy, the state of the AI Act's implementation, and the exchange of best practices for national AI governance. The Commission and Member States aim to ensure a timely setup of the AI governance framework and effective implementation of the Act. This meeting followed the preparatory session on 19 June.
Analysis
How Europe is shaping AI for human rights: Virginia Dignum, Director of the AI Policy Lab at Umeå University, wrote an article comparing the Council of Europe's AI convention with the AI Act. The Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law was opened for signature on 5 September 2024. It emphasises transparency, accountability, risk management and protection for vulnerable groups. The AI Act takes a more market-centric approach, providing clear regulatory guidelines for businesses while protecting consumer rights. Its risk-based framework allows for differentiated oversight according to the risk posed by AI applications, particularly in sectors like healthcare and transportation. While the AI Act excels through its precise framework fostering compliance and ethical AI development, the Council of Europe convention has a broader scope: it emphasises human rights, democracy and the rule of law across all sectors, going beyond economic concerns.
AI Office needs top scientific talent, not familiar faces? Alex Petropoulos, Advanced AI Analyst at ICFG, and Max Reddel, Advanced AI Director at ICFG, published an op-ed in Euractiv arguing that the AI Office faces a decisive moment, with reports suggesting the European Commission may fill the crucial Lead Scientific Adviser role internally rather than externally. Petropoulos and Reddel claim that this decision could significantly impact the Office's ability to attract the top scientific and technical talent needed to govern AI in Europe effectively. They argue that finding a suitable internal candidate with the necessary expertise in AI science is highly unlikely, and that qualified individuals could earn significantly more in the private sector. The authors state that the importance of this position lies in its influence on talent acquisition, which is crucial for addressing complex regulatory challenges, such as assessing the systemic risks posed by general-purpose AI models. The UK and US have prioritised hiring and have successfully recruited former DeepMind and OpenAI researchers for similar roles. In contrast, the EU AI Office has yet to appoint key positions such as the Head of Unit for AI Safety and the Lead Scientific Adviser.
How to design the code of practice? A study by Yann Padova and Sebastian Thess, lawyers at the law firm Wilson Sonsini Goodrich & Rosati, commissioned by CCIA Europe, provides ten recommendations for the development of the AI Act Code of Practice for general-purpose AI (GPAI) providers:
1) incentivise GPAI providers to participate in self-regulation rather than relying solely on in-house compliance;
2) align the Code with international approaches and standards to prevent fragmentation;
3) provide clear and adaptable guidance on GPAI provider obligations, including systemic risk domains and safeguards;
4) avoid issues outside the Code's scope, including the interpretation of compliance with Union law on copyright;
5) where the AI Act allows flexibility between different compliance measures, align with international approaches and technical standards;
6) determine the right level of granularity for each obligation, accounting for provider expertise;
7) involve GPAI providers in setting that level of granularity;
8) ensure AI Office involvement in drafting the Code;
9) reflect the different roles of stakeholders participating in the drafting process; and
10) organise a clear and efficient drafting process with precise milestones.