The EU AI Act Newsletter #77: AI Office Tender
The AI Office will soon be looking for third-party contractors to support the monitoring of compliance and risk assessment of general-purpose AI models.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
AI Office AI safety tender: The AI Office will soon be looking for third-party contractors to provide technical assistance to support its monitoring of compliance, in particular its assessment of risks posed by general-purpose AI models at Union level, as authorised by Articles 89, 92 and 93 of the AI Act. The €9,080,000 tender is divided into six lots. Five lots address specific systemic risks: 1) CBRN, 2) cyber offence, 3) loss of control, 4) harmful manipulation and 5) sociotechnical risks. These lots involve risk modelling workshops, the development of evaluation tools, the creation of a reference procedure and reporting template for risk assessment, and ongoing risk monitoring services. The sixth lot focuses on an agentic evaluation interface, providing software and infrastructure to evaluate general-purpose AI models across diverse benchmarks. Interested parties can sign up to be notified of further details related to this tender.
Analyses
How Big Tech weakens rules on advanced AI: According to an investigation by Corporate Europe Observatory and LobbyControl, Big Tech has heavily influenced the weakening of the Code of Practice for general-purpose AI models, a key implementation tool for the AI Act. Despite complaints from Google about model developers being "heavily outweighed by other stakeholders", these tech companies enjoyed privileged access to the drafting process. Nearly half of the organisations invited to dedicated model provider workshops were US companies, including Google, Microsoft, Meta, Apple, Amazon, and well-funded AI firms like OpenAI, Anthropic and Hugging Face. Meanwhile, the 350 organisations representing civil society, academics and European businesses faced restricted participation and limited transparency about provider workshop discussions. US tech giants coordinated messaging claiming the Code represented "regulatory overreach" that would "stifle innovation", rhetoric aligned with deregulatory political trends. The investigation also argues that Big Tech has successfully enlisted the US government to push back against EU digital rules, including through the Trump administration's executive order threatening tariffs against countries fining American tech companies and reported pressure from the US Mission to the EU on the Commission over the Code of Practice.
US companies still engaging with Code despite Trump: Cynthia Kroet from Euronews reported that US technology companies remain "very proactive" in the Code of Practice development process, with no perceived change in attitude following the change in American administration, according to an AI Office official speaking to Euronews. The voluntary Code, designed to help general-purpose AI providers comply with the AI Act, has missed its 2 May publication deadline. The Commission extended consultations after receiving multiple requests from stakeholders to maintain engagement beyond the original timeframe. With approximately 1,000 participants involved through plenary sessions and workshops, the drafting process continues under thirteen appointed experts. The EU executive aims to publish the Code before 2 August, when the relevant AI Act rules enter into force. Recent lobbying allegations by Corporate Europe Observatory and LobbyControl suggest Big Tech enjoyed structural advantages during drafting, though the Commission maintains all participants had equal engagement opportunities. The EU official could not say whether companies are likely to sign, but stressed the importance of them doing so. An alternative option, where businesses commit only to specific parts of the Code, has not been put on the table yet.
Enforcement hampered by lack of funding and expertise: Tonya Riley, Privacy Reporter for Bloomberg Law, reported that financial constraints and talent shortages will challenge AI Act enforcement, according to European Parliament digital policy advisor Kai Zenner. Speaking at a George Washington University conference, Zenner highlighted that "most member states are almost broke" and losing technical expertise to better-paying tech companies. "This combination of lack of capital finance and also lack of talent will be really one of the main challenges," Zenner stated, noting that regulators need "real experts" to oversee AI companies effectively. With the 2 August 2025 deadline approaching for designating national enforcement authorities, Zenner suggested member states facing budget crises may prioritise AI innovation over regulation. Even financially stable countries may hesitate to enforce against AI companies they are investing in for economic growth. Zenner, who helped draft the Act, expressed disappointment with the final version, calling it "vague" and "contradicting itself", questioning whether it would work without hampering innovation.
Italy and Hungary fail to appoint fundamental rights bodies: Cynthia Kroet from Euronews reported that Italy and Hungary have failed to meet the November 2024 deadline to appoint bodies ensuring fundamental rights protection in AI deployment, according to European Commission data. The Commission is currently in contact with both member states and is supporting them as they prepare to meet this obligation. All 27 EU member states were required to identify and publicly list responsible authorities by 2 November 2024 under the AI Act. The number of appointed authorities varies significantly across member states based on national implementation approaches. Bulgaria listed nine authorities, including its national Ombudsman and Data Protection Authority, whilst Portugal identified 14 bodies. Slovakia appointed just two authorities, whilst Spain designated 22. The Commission is working with member states to establish a consistent understanding of which authorities are appropriate and to ensure effective cooperation between these public bodies and future market surveillance authorities.
Korea vs EU AI legislation: Sakshi Shivhare, Policy Associate for Asia-Pacific at the Future of Privacy Forum, and Kwang Bae Park, Partner at Lee & Ko, published a comparison of South Korea's AI Framework Act and the EU's AI Act. Whilst both frameworks employ tiered classification and transparency requirements, South Korea's approach differs in several areas. The Framework Act features a simpler risk categorisation with no prohibited AI practices, lower financial penalties, and the establishment of initiatives and government bodies specifically designed to promote AI development and deployment. Understanding these similarities and differences is essential for developing global compliance strategies, particularly as many technology companies must navigate both regulatory frameworks simultaneously.
Super interesting information, and as always a great and valuable summary of current events. Thank you! Just a quick note:
Unless the official list at https://digital.gob.es/dam/es/portalmtdfp/DigitalizacionIA/AuthoritiesFundamentalRights-Spain.pdf is outdated, I believe the figure in Cynthia Kroet's Euronews report is incorrect for my country: Spain has appointed 12 fundamental rights bodies, not 22.