The EU AI Act Newsletter #83: GPAI Rules Now Apply
AI Act obligations for providers of general-purpose AI models have entered into application across the EU, bringing enhanced transparency, safety and accountability.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
General-purpose AI rules now in effect across the bloc: AI Act obligations for providers of general-purpose AI (GPAI) models have entered into application across the EU, bringing enhanced transparency, safety and accountability. The rules aim to ensure clearer information about the training of AI models, better enforcement of copyright protection, and more responsible AI development. GPAI models are defined as those trained with over 10^23 FLOP and capable of generating language. From 2 August, providers must comply with transparency and copyright obligations when placing GPAI models on the EU market; providers of models already on the market before 2 August 2025 have until 2 August 2027 to comply. Providers of advanced models exceeding 10^25 FLOP face additional obligations, including notifying the Commission and meeting enhanced safety and security requirements.
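The compute thresholds above can be sketched as a simple tiering rule. This is an illustrative simplification only, not legal guidance: the function and tier names are our own, and the Act's actual classification also considers capabilities and Commission designation, not compute alone.

```python
# Hypothetical sketch of the AI Act's GPAI compute thresholds as cited
# above: >10^23 FLOP presumes a GPAI model; >10^25 FLOP triggers the
# additional obligations for advanced models. Names are illustrative.

def gpai_tier(training_flop: float) -> str:
    """Return a rough obligation tier for a given training compute budget."""
    if training_flop > 1e25:
        # Advanced models: transparency and copyright duties, plus
        # Commission notification and enhanced safety/security requirements.
        return "gpai-additional-obligations"
    if training_flop > 1e23:
        # Presumed GPAI: transparency and copyright obligations apply.
        return "gpai"
    # Below the indicative threshold: not presumed to be a GPAI model.
    return "not-gpai"

print(gpai_tier(3e25))  # a frontier-scale training run
print(gpai_tier(5e23))  # a mid-sized model
print(gpai_tier(1e22))  # below the threshold
```

In practice the thresholds act as presumptions rather than a mechanical test, but the sketch captures why the 10^25 FLOP line matters: it separates ordinary GPAI duties from the heavier notification and safety regime.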
Commission publishes a summary template for GPAI training data: The European Commission has released a template to help general-purpose AI providers summarise the content used to train their models. It is intended as a simple, uniform and effective way for GPAI providers to increase transparency in compliance with the AI Act. Executive Vice-President Henna Virkkunen described the template as another important step towards trustworthy and transparent AI, one that supports GPAI providers in complying with the Act while building the trust needed to unlock AI's full potential for the economy and society. General-purpose AI models are trained on large quantities of data, yet little information is typically available about its origin. The public summary will provide a comprehensive overview of training data, list the main data collections, and explain other sources used. The template will also assist parties with legitimate interests, such as copyright holders, in exercising their rights under Union law.
First Code of Practice signatories announced: The General-Purpose AI Code of Practice has been confirmed by the Commission and the AI Board as an adequate voluntary tool for GPAI providers to demonstrate compliance with the AI Act. Twenty-six organisations have signed the full code, including major technology companies such as Amazon, Anthropic, Google, IBM, Microsoft and OpenAI, alongside firms like Aleph Alpha, Cohere, Mistral AI and various specialised AI companies. Additionally, xAI signed only the Safety and Security Chapter, meaning it must demonstrate compliance with the AI Act's transparency and copyright obligations through alternative adequate means rather than the full code framework. The Commission notes that some signatures may not yet be reflected in the published list, which is continuously updated as they are confirmed.
Analyses
US Big Tech largely embraces EU AI code despite initial concerns: According to Pieter Haeck from POLITICO, the EU's General-Purpose AI Code of Practice has secured widespread support from US Big Tech and other AI leaders, contrary to expectations of turbulence. Almost all companies in scope signed ahead of the 2 August deadline when new AI Act obligations take effect. Major AI developers including OpenAI, Google and Microsoft have committed to transparency, copyright and safety rules by adopting the Commission's voluntary code, representing a win for the European Commission amid pressure on the bloc's tech regulations from the US. Meta stands as the only major holdout, refusing to sign. The broad support was not guaranteed during the year-long drafting process, with lingering concerns about disclosing training data that could empower rights-holders to claim remuneration. Google warned against departures from EU copyright law and requirements exposing trade secrets. xAI criticised copyright provisions as “clearly over-reach” and refused to sign that chapter. This signals potential future tensions between Big Tech and Brussels, with the Commission planning a copyright law review by mid-2026.
Does the EU AI Code set a new global safety standard? Mia Hoffmann, a Research Fellow at Georgetown's Center for Security and Emerging Technology (CSET), reviewed the measures in the Code of Practice's safety and security chapter, comparing them to existing practices and assessing their potential global impact. Recent assessments have found frontier AI companies' risk management processes lacking. Acceptable risk thresholds appear universally undefined, leaving providers substantial discretion over societal risk levels when deploying models and potentially allowing commercial interests to override caution. Providers selectively choose which risks to mitigate, with fewer than half conducting substantive testing for dangerous capabilities linked to large-scale bio- or cyber-terrorism. Against this baseline, the Code's measures appear significantly more rigorous and comprehensive than current best practices. Whether the chapter has a global effect depends on its reception. Implementation at the model level increases the chances of worldwide influence, as the training costs of models exceeding 10^25 FLOP make jurisdiction-specific development unlikely. However, voluntary adherence allows providers to develop alternative compliance frameworks. Regardless, whilst Code adherence is voluntary, AI Act compliance remains mandatory, signalling the depth, rigour and comprehensiveness expected of any alternative compliance measures.
Creative groups say AI Act is inadequate for copyright protection: Anna Desmarais from Euronews reported that as parts of the AI Act come into force, creative industry groups argue it contains loopholes that fail to protect artists from AI training on their copyrighted material. Organisations including the European Composer and Songwriter Alliance (ECSA) and the European Grouping of Societies of Authors and Composers (GESAC) say creators lack clear opt-out mechanisms or payment when tech companies use their work. Under EU copyright law, companies can use copyrighted materials for text and data mining unless creators reserve their rights, but the opt-out process remains unclear. ECSA's Marc du Moulin noted that artists do not know how to opt out whilst their work is already being used, calling it "putting the cart before the horse." The Act's transparency requirements only apply prospectively, making retrospective payment claims difficult. GESAC's Adriana Moscoso said that some members have attempted licensing negotiations with AI companies without success, receiving no response. Both groups want the Commission to clarify opt-out rules and mandate collective licensing negotiations. Germany's GEMA has filed two copyright lawsuits, against OpenAI and Suno AI, which could set important legal precedents.
The EU is attempting to set genuine AI safety standards, but the Code's voluntary nature is a familiar regulatory compromise: it sounds tough while leaving companies room to route around it through "alternative frameworks." Whether it changes practice on the ground or merely generates more paperwork for compliance teams remains to be seen.