The EU AI Act Newsletter #76: Consultation on General-Purpose AI
The European Commission has opened a targeted consultation seeking stakeholder input to clarify the rules for general-purpose AI models.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission launches consultation on general-purpose AI model rules: The European Commission has opened a targeted consultation seeking stakeholder input to clarify the rules for general-purpose AI (GPAI) models. This feedback will inform Commission guidelines explaining key concepts and addressing fundamental questions about the AI Act's GPAI provisions, including definitions, provider responsibilities and market placement requirements. The guidelines will detail how the AI Office will support compliance and explain how an approved Code of Practice could reduce administrative burdens while serving as a regulatory compliance benchmark. All stakeholders, including GPAI providers, downstream providers of AI systems, civil society, academia, other experts and public authorities, can submit feedback until 22 May. While not legally binding, the guidelines will reveal how the Commission plans to interpret and enforce the GPAI rules under the AI Act. Both the guidelines and the final Code of Practice are expected before August 2025. A summary of the preliminary guidelines (published alongside the consultation), which clarify the scope of obligations for providers of GPAI models, can be found here.
Commission launches tender for AI Act Service Desk: The European Commission has issued a call for tender to establish an external team for the upcoming AI Act Service Desk, designed to support the smooth implementation of the AI Act. The Service Desk will function as an information hub, providing clear guidance on the application of the AI Act and offering targeted answers to stakeholder queries. Through the AI Act Service Desk, stakeholders will be able to submit questions directly to the AI Office; the external team being tendered will work closely with the AI Office to address these inquiries. This initiative supports the AI Act's objectives of enhancing public trust in AI technology whilst providing businesses with the legal certainty needed for European AI scaling and deployment. The tender remains open until 19 May 2025, and the Service Desk will launch in summer 2025. Responses will be provided in all EU languages via an online platform currently under development.
Analyses
The US government's recommendations for the EU's code of practice: Luca Bertuzzi of MLex reported that the US administration has formally requested that the European Commission address flaws in the code of practice, according to a démarche document reviewed by MLex. While acknowledging improvements in the latest draft of the code of practice, American officials are calling for substantial amendments to enhance flexibility, reduce prescriptiveness, strengthen trade secret protections, better align the code with the AI Act itself, and clarify its relationship with EU copyright law. The US critique highlights several practical implementation challenges, particularly for open-weight models, including monitoring, incident reporting and staging requirements. The letter also raises concerns about regulatory discrimination against predominantly US-based large AI developers through disproportionate regulatory burdens that could discourage growth. Additional concerns include elements exceeding the AI Act's provisions, unclear application to downstream players who modify models, and terminology misaligned with international scientific consensus.
Commission ready to step in if AI standards delayed: Cynthia Kroet from Euronews reported that the European Commission may develop "alternative solutions" to support companies seeking to demonstrate compliance with the AI Act, in case technical standards are not ready on schedule. CEN-CENELEC, comprising 34 national standardisation bodies tasked with drafting AI Act standards, has said that deliverables initially due by August 2025 will now be delayed until 2026. "If needed, to address delays and/or possible gaps, the Commission might consider temporary alternative solutions," stated Commission spokesperson Thomas Regnier, emphasising that whilst these standards "are not mandatory," they "will make compliance efforts much easier for providers of high-risk AI systems". Companies can still develop high-risk AI systems without these standards, though their availability would significantly simplify the compliance process. The draft standards expected this year must undergo mandatory editing, Commission assessment, consultation and voting procedures before finalisation.
What is behind Europe's push to simplify? Ramsha Jahangir, an Associate Editor at Tech Policy Press, published an analysis of how the European Union's longstanding position as a global 'super-regulator' in digital governance is facing a pivotal shift as it pursues a regulatory 'simplification' agenda. This initiative, whilst still taking shape, is expected to adjust requirements for smaller firms, reduce administrative burdens, and ensure regulatory coherence. However, the move coincides with significant political pressure, both internal and external. Thirteen EU member states have signed a declaration advocating for a reviewed digital rulebook and the removal of regulatory barriers, driven by concerns about Europe's competitive position against US and Chinese dominance in AI and tech infrastructure. Whilst experts acknowledge the complexity of EU regulatory frameworks, especially for SMEs, many question whether deregulation will genuinely improve competitiveness. Critics argue that structural weaknesses such as market fragmentation and underdeveloped capital ecosystems, rather than regulation itself, are the primary obstacles to European innovation. There are growing concerns that 'simplification' could undermine key protections in landmark legislation like the AI Act and the GDPR.
Copyright issues in the training of general-purpose AI: Tristan Marcelin and Filippo Cassetti of the European Parliamentary Research Service discussed the two key copyright provisions in the AI Act that affect general-purpose AI (GPAI) providers. Article 53(1)(c) requires compliance with copyright law and the text and data mining (TDM) opt-out exception from the Copyright Directive, applicable regardless of where training occurs. Article 53(1)(d) mandates publication of a sufficiently detailed summary of the content used for training. Researchers note that the legislation inadequately addresses AI and intellectual property issues, particularly regarding copyrighted materials in training datasets. The Copyright Directive's TDM exceptions remain unclear, creating legal uncertainty. Council findings from December 2024 reveal that several Member States believe AI training exceeds the scope of the TDM exceptions, though most oppose immediate new legislation, preferring to monitor existing frameworks. Commissioner Henna Virkkunen suggested that the Commission should explore licensing mechanisms between creative industries and AI companies. While the forthcoming GPAI Code of Practice cannot modify the copyright framework, it may bridge the gap until the June 2026 Copyright Directive review.