The EU AI Act Newsletter #86: Concerns Around GPT-5 Compliance
Concerns have been raised about OpenAI's compliance with the EU AI Act's requirements for its recently released GPT-5 model, particularly regarding the disclosure of training data.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission consultation on transparent AI systems: The European Commission has initiated a consultation to develop guidelines and a Code of Practice for transparent AI systems, focusing in particular on helping providers and deployers of generative AI detect and label AI-generated or manipulated content. Under the AI Act, deployers and providers of generative AI must inform users when they are interacting with AI systems, when they are exposed to emotion recognition or biometric categorisation systems, or when they encounter AI-generated content. The Commission is seeking input from a broad range of stakeholders, including AI providers, deployers, public and private organisations, academics, civil society representatives, supervisory authorities and citizens. The consultation closes on 2 October 2025, alongside a simultaneous call for expressions of interest from stakeholders wishing to participate in drafting the Code of Practice. These transparency obligations, part of the EU's effort to promote responsible and trustworthy AI, will apply from 2 August 2026.
German privacy watchdogs are upset by the implementation: According to Euractiv's Maximilian Henning, German data protection authorities have strongly criticised the government's draft implementation law for the AI Act, arguing that it inappropriately diminishes their authority. The AI Act employs a risk-based regulatory framework overseen by designated national authorities. The main concern raised by 17 German state data protection authorities relates to the supervision of AI systems in sensitive areas including law enforcement, border management, justice and democracy. The draft law assigns oversight in these areas to the telecommunications regulator (BNetzA), which the authorities argue contradicts the AI Act's stipulation that data protection authorities should oversee high-risk AI applications in these sensitive domains. Meike Kamp, head of Berlin's privacy authority, warned that delegating these responsibilities to BNetzA would result in a "massive weakening of fundamental rights" and criticised the draft law's apparent disregard for their importance.
Analyses
ChatGPT may not be following the EU's rules yet: Questions have arisen about OpenAI's compliance with AI Act requirements for its newly released GPT-5 model, particularly regarding training data disclosure obligations, as Euractiv's Maximilian Henning reports. The AI Act requires general-purpose AI developers to publish summaries of their training data, for which the AI Office provided a template in July. While models released before 2 August 2025 have until 2027 to comply, those released after that date must comply immediately. GPT-5, released on 7 August 2025, appears to lack the required training data summary and copyright policy, despite OpenAI being a signatory to the EU's Code of Practice. According to Petar Tsankov, CEO of AI compliance company LatticeFlow, the model likely qualifies for the “systemic risk” classification, which requires model evaluations and the management of potential systemic risks. The European Commission indicates that GPT-5's compliance requirements depend on whether it is considered a new model under the law, which the AI Office is currently assessing. However, enforcement will not begin until August 2026, giving OpenAI time to address any compliance issues.
The EU AI Office is facing hiring challenges: Freelance reporter Peder Schaefer wrote in Transformer that the AI Office, despite its crucial role in implementing the AI Act, is facing significant staffing challenges. While it has attracted some notable talent and currently employs over 125 staff members, with plans to hire 35 more by the end of the year, key leadership positions remain unfilled. The Office, responsible for over 100 tasks including enforcing the Code of Practice and levying substantial fines for non-compliance, struggles with recruitment due to uncompetitive salaries, slow hiring processes and pressure to ensure representation from member states. Current postings offer between $55,000 and $120,000, which, despite tax benefits, falls far short of private sector compensation, where technical staff can earn millions. The staffing shortage has become particularly pressing since the general-purpose AI rules took effect on 2 August. MEP Axel Voss suggests the compliance and safety units alone need 200 staff, significantly more than currently proposed.
The EU is still grappling with the complexities of AI copyright: Bertin Martens, Senior Fellow at Bruegel, wrote that the EU's implementation of the AI Act, including its Code of Practice (CoP), reveals ongoing tensions between copyright law and the data needs of AI development. Martens argues that while transparency and safety requirements are relatively straightforward, copyright issues present significant challenges. Copyright obligations shrink the pool of available data and, through licensing requirements, raise the cost of model training data. A prohibition on reproducing copyright-protected content in model outputs makes sense, as does a requirement to train only on lawfully accessible data. However, managing dataset transparency and handling the growing number of copyright opt-outs proves problematic. EU regulators face a dilemma: strict copyright enforcement could hamper EU competitiveness in AI development, while immediate legal reform is impractical. The subtle weakening of copyright enforcement in the CoP has attracted most major AI developers as signatories, with the exceptions of Meta and xAI. A more satisfactory policy would require a debate on the role of AI in enhancing learning, research and innovation, as the current copyright framework – dominated by media industries representing less than 4% of GDP – may impede progress.
European industry isn't showing up for standards development: A key figure in EU AI standards development has, according to Euractiv's Maximilian Henning, criticised European industry's lack of participation in creating the technical standards crucial for implementing the AI Act. Piercosma Bisconti, who leads the drafting of an “AI trustworthiness framework” for CEN and CENELEC, publicly called out companies for their absence from the standards-setting process. His criticism was particularly directed at signatories of the “AI champions initiative”, which includes major firms such as Airbus, Siemens, Spotify and SAP. The AI Act requires detailed technical standards to convert its broad principles into concrete guidelines for AI developers, but slow progress in standards development has prompted industry and EU governments to request implementation delays. Bisconti, who is also co-founder of Italian AI company Dexai, specifically criticised companies that have called for "stopping the clock" on the AI Act while simultaneously failing to engage in the standards development process, noting that "EU industry is barely at the table."

The EU AI Office’s hiring challenges are concerning, and not something I had really considered before. Does anyone have ideas on how to address this? One possibility, though I have not researched its feasibility or effectiveness, would be a programme where the EU funds study at top universities in return for a commitment to work at the AI Office for a set number of years after graduation.
I also know the EU has only a supporting competence in education, but I think more attention should go to improving early science education so that fewer students feel discouraged from pursuing science. An approach that values experimentation, accepts mistakes and builds genuine understanding could help. I believe we need more initiatives in this area.
The difficulty with both of these ideas is that they would only pay off in the medium to long term. In the short term, perhaps better branding could help by presenting a role at the EU AI Office as morally ambitious and exciting rather than bureaucratic.