The EU AI Act Newsletter #73: Scientific Panel Rules
The Commission has established rules for a new scientific advisory group of independent AI experts.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Adoption of implementing act for the scientific panel: The European Commission has adopted an implementing act establishing a new scientific advisory panel of independent AI experts and setting out its operational procedures. The panel, mandated by the AI Act, will support the AI Office and national authorities in implementing and enforcing the legislation, providing technical advice on enforcement matters and alerting the AI Office to potential risks from general-purpose AI models. The next step is a call for expressions of interest to select suitable experts for this governance role.
The final draft of the Code: The third and final draft of the General-Purpose AI Code of Practice has been published by independent experts, featuring a streamlined structure with refined commitments and measures. This version includes two commitments on transparency and copyright applicable to all general-purpose AI model providers, plus 16 additional commitments on safety and security applicable only to providers of models classified as posing systemic risk. The draft introduces a user-friendly Model Documentation Form and simplifies copyright measures. Certain open-source model providers are exempted from transparency obligations in accordance with the AI Act. For a small number of providers of the most advanced models with potential systemic risks, the Code outlines requirements for model evaluations, incident reporting and cybersecurity obligations. The draft balances clear commitments with flexibility to adapt to evolving technology. Stakeholders can provide feedback until 30 March 2025 via an interactive website set up for that purpose, with further discussions planned according to the AI Office timeline. The final Code, expected in May, will serve as a compliance tool for the AI Act, incorporating state-of-the-art practices.
Analyses
Spain to impose fines for unlabelled AI-generated content: According to Reuters, Spain's government has approved a bill implementing guidelines from the AI Act, imposing severe penalties on companies failing to label AI-generated content properly. The legislation, which requires parliamentary approval, classifies improper labelling as a "serious offence" punishable by fines of up to €35 million or 7% of global annual turnover. Digital Transformation Minister Oscar Lopez emphasised that AI can improve lives but also spread misinformation and undermine democracy. The bill also prohibits subliminal techniques targeting vulnerable groups, such as chatbots encouraging gambling addictions or toys promoting dangerous challenges. Additionally, the legislation prevents AI-based classification of people through biometric data to determine access to benefits or assess crime risk. However, authorities retain permission to use real-time biometric surveillance in public spaces for national security purposes. The newly established AI supervisory agency AESIA will enforce these regulations, with specific watchdogs overseeing cases involving data privacy, crime, elections, credit ratings, insurance or capital markets.
French creative sector brings copyright action against Meta: Samara Baboolal from JURIST summarised that several French publishing associations have filed a joint legal complaint against Meta in the Paris Judicial Court, alleging unauthorised use of copyrighted works to train its generative AI systems, in violation of the AI Act, which took effect in August 2024. The National Publishing Union (SNE), the Société des Gens de Lettres (SGDL) and the National Union of Authors and Composers (SNAC) are pursuing this action on the fundamental principle that AI market development should not harm the cultural sector. The publishers seek the removal of unauthorised data directories used for AI training, stronger protections for creators, as well as compensation for those whose work was used in the training. The AI Act requires generative AI systems to comply with transparency requirements and copyright law, disclose AI-generated content, prevent illegal content generation, and publish summaries of copyrighted training data. These are obligations which the complainants allege Meta has breached.
AI Act not ideal but workable: According to Luca Bertuzzi from MLex, Mistral AI CEO Arthur Mensch has described the AI Act as premature and overly tech-focused but ultimately workable. Speaking at a conference, he acknowledged implementation challenges but noted ongoing collaboration with regulators. Despite previously lobbying the French government to exclude AI models from the Act's scope, arguing it could disadvantage European companies against international competitors, Mensch no longer considers regulation Europe's primary challenge. Instead, he highlighted the need for European businesses to transform their operations with AI more ambitiously, noting positive movement in this direction over recent months. Mensch observed increasing engagement from European companies seeking AI solutions to enhance global competitiveness, particularly in sectors where Europe leads, such as automotive, manufacturing and pharmaceuticals. He also identified market fragmentation as a significant European challenge, particularly in telecommunications, suggesting that consolidation to create larger tech players would be beneficial.
EU lawmakers try to counter Washington lobbying: Eliza Gkritsi and Max Griera from POLITICO reported that the European Union politicians who helped develop the bloc's technology regulations recently visited Washington to counter criticism from the Trump administration and tech leaders. The EU delegation faced significant challenges as Trump's team characterised EU digital regulations as government censorship unfairly targeting American companies, and Trump threatened retaliatory tariffs against foreign restrictions on US tech giants. MEP Sandro Gozi described fighting against views "fuelled by Big Tech, starting with Elon Musk", whilst MEP Anna Cavazzini argued that attacks on EU laws represent the views of only "powerful tech giants in Silicon Valley" rather than the broader industry. The delegation met with four members of Congress, Republicans Jim Jordan, Mark Green and Scott Fitzgerald, and Democrat Don Beyer, attempting to explain the goals of the Digital Services Act, Digital Markets Act and Artificial Intelligence Act. Spanish MEP Pablo Arias Echeverría noted that US officials lacked detailed understanding of EU regulations, instead repeating generalised criticisms about free speech restrictions, anti-American business measures and innovation impediments.
The third draft of the General-Purpose AI Code of Practice is not final. The final version is due to be presented by 25 May; see the timeline here: https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
As a Spanish citizen living in Spain, I must admit I disagree with the increasingly strict regulatory approach the EU is taking towards technology. While I fully support the intention to protect users and ensure ethical development, I believe that some of these laws risk stifling innovation and overregulating a fast-evolving field. The narrative that these rules only target powerful American companies is not entirely unfounded: many of the burdens seem to disproportionately affect non-European firms, which could deter global collaboration and investment. I fear that the EU may be leaning too far into restrictive measures that could ultimately limit our own technological progress. I also believe it is essential that regulators take serious account of the views of technical experts in AI, as well as those of actual AI users, to ensure that the regulations are both effective and grounded in real-world understanding.