The EU AI Act Newsletter #64: Draft on the Scientific Panel
The European Commission is seeking feedback on a draft act regarding the establishment of a scientific panel of independent experts under the AI Act.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Implementing act on scientific panel: The European Commission is seeking feedback on a draft act regarding the establishment of a scientific panel of independent experts under the AI Act. This panel will advise and assist the AI Office and national market surveillance authorities in implementing and enforcing the Act. The feedback period runs for four weeks, from 18 October to 15 November. All feedback received will be published on the Commission's website and must comply with its established feedback rules. The collected input will be considered when finalising the initiative, which sets out rules for the establishment and operation of the panel.
McNamara and Benifei to lead AI Act monitoring: Cynthia Kroet, Senior EU Policy Reporter at Euronews, reported that the European Parliament has appointed Michael McNamara and Brando Benifei to lead its AI monitoring group, which will oversee the AI Act's implementation. McNamara, representing the Committee on Civil Liberties, Justice and Home Affairs (LIBE), and Benifei, representing the Committee on the Internal Market and Consumer Protection (IMCO), will serve as co-chairs. Benifei previously served as co-rapporteur for the AI Act, while McNamara joined Parliament in July following the European elections. The Legal Affairs Committee has requested to join the cross-parliamentary group but has not yet named a representative. The group's first meeting date has not been set, and most discussions are expected to be private. Similar working groups established for the Digital Services Act and Digital Markets Act will continue under the new Parliament.
AI Office progress in hiring: According to POLITICO's Tech Reporter Pieter Haeck, the European Commission has filled half of its planned positions for the AI Office, with 83 staff currently employed and 17 more expected to join soon. The Office, which began operations in June within the digital department DG CONNECT, has a final target of 140 employees. Industry representatives have privately expressed concerns about understaffing given the Office's extensive responsibilities. The organisation is structured into five units, including one focused on "excellence in artificial intelligence and robotics" based in Luxembourg. Notably, the AI Safety unit, considered the most closely watched of the five, currently lacks a dedicated head, with AI Office director Lucilla Sioli temporarily filling the role.
Analysis
An LLM checker assessing compliance: Pascale Davies from Euronews reported that a new tool developed by ETH Zurich, Bulgaria's Institute for Computer Science, AI and Technology, and LatticeFlow AI has evaluated leading generative AI models' compliance with EU AI rules. The "LLM Checker" assessed models from Alibaba, Anthropic, OpenAI, Meta and Mistral AI, scoring them across various categories including cybersecurity, environmental well-being, privacy and data governance. While the models averaged scores of 0.75 or above overall, several fell short on discrimination and cybersecurity: OpenAI's GPT-4 Turbo scored 0.46 on discriminatory output, and Alibaba Cloud's model scored 0.37. However, most performed well with regard to harmful content and toxicity requirements. The European Commission welcomed the study and platform as a first step in translating the AI Act into technical requirements. The tool includes an open-source framework for evaluating LLMs against EU requirements.
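To make the scoring scheme concrete, here is a minimal, purely illustrative sketch of how a checker of this kind might flag compliance categories whose scores fall below a benchmark level. The function name, category set and all scores except the two reported figures (0.46 for GPT-4 Turbo on discriminatory output, 0.37 for Alibaba Cloud's model) are invented for illustration; this is not the LLM Checker's actual API.

```python
# Hypothetical illustration of threshold-based compliance flagging.
# Scores are on a 0.0-1.0 scale, as in the article; 0.75 is the
# average level the article cites for the models overall.
THRESHOLD = 0.75

def flag_weak_categories(scores: dict[str, float],
                         threshold: float = THRESHOLD) -> list[str]:
    """Return categories scoring below the threshold, worst first."""
    return sorted((c for c, s in scores.items() if s < threshold),
                  key=scores.get)

# Invented example profile; only the 0.46 figure comes from the article.
example_scores = {
    "harmful content and toxicity": 0.95,  # illustrative value
    "privacy and data governance": 0.80,   # illustrative value
    "discriminatory output": 0.46,         # GPT-4 Turbo, per the article
    "cybersecurity": 0.60,                 # illustrative value
}

print(flag_weak_categories(example_scores))
# → ['discriminatory output', 'cybersecurity']
```

A model passing every category with 0.75 or above would simply yield an empty list, matching the article's observation that most models clear the bar overall while falling short in specific categories.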
Industry event summary on general-purpose AI rules: CCIA Europe hosted the second European AI Roundtable, focused on drafting the Code of Practice for general-purpose AI (GPAI). The event brought together experts from AI firms, academia, governments, civil society and EU institutions. Legal expert Yann Padova identified four key risks in the process: 1) the challenge of reconciling views from nearly 1,000 stakeholders; 2) GPAI providers' limited representation (5%) in the drafting process despite their primary role under the Act; 3) the risk of discussions straying beyond the Act's scope; and 4) concerns about potential compliance requirements not mandated by the Act. The next Roundtable, scheduled before the end of the year, will address the interplay between AI, privacy and data protection laws.
Instructions regarding high-risk AI systems: The Knowledge Centre Data & Society has developed a working template for instructions for use of high-risk AI systems under Article 13 of the AI Act. Created through policy prototyping, the template aims to guide providers and deployers in meeting transparency requirements. It sets out the essential elements that must be included in the instructions for use, together with an example. Providers must complete the template with specific information about their high-risk AI system, ensuring that the instructions reflect the system's purpose, characteristics and risk profile. Deployers can use it to review instructions and request clarification if needed. The document includes practical guidance, and providers are also advised to check for any additional guidelines from competent authorities regarding Article 13.
AI standards for the Act: The JRC published a brief discussing the key characteristics expected of the upcoming standards that will support implementation of the AI Act. The Act, adopted in August 2024, includes provisions for high-risk AI systems that will take effect after a transition period of two to three years. European harmonised standards, once published in the EU Official Journal, will provide a legal presumption of conformity for AI systems that follow them. CEN and CENELEC, the leading European standardisation organisations, are drafting the necessary AI standards at the European Commission's request. Standards are important instruments for supporting the implementation of EU policies and legislation, helping ensure the protection of safety and fundamental rights. They also aim to establish equal conditions of competition, particularly for SMEs developing AI solutions. Despite ongoing drafting of the requested standards and a planned update to the standardisation request, progress has been slower than expected: challenges in reaching consensus on new work items and their scope have caused delays in the standardisation committees. As consensus starts to emerge, progress will need to be consistent to meet the deadlines.