The EU AI Act Newsletter #79: Consultation on High-Risk AI
The European Commission has initiated a public consultation regarding the implementation of regulations for high-risk AI systems under the AI Act.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission launches public consultation on high-risk AI systems: The European Commission has initiated a public consultation regarding the implementation of rules for high-risk AI systems under the AI Act. The consultation aims to gather practical examples and clarify issues surrounding high-risk AI systems. This information will inform forthcoming Commission guidelines on the classification of high-risk AI systems and their associated requirements. The consultation will also examine responsibilities throughout the AI value chain. The Act defines high-risk AI systems in two categories: those used as safety components of products covered by EU harmonised product safety legislation, and those that could significantly impact people's health, safety, or fundamental rights in specific scenarios outlined in the Act. The Commission welcomes input from a broad range of stakeholders, including providers and developers of high-risk AI systems, businesses and public authorities using such systems, as well as academia, research institutions, civil society, governments, supervisory authorities, and citizens in general. The consultation period runs for six weeks, concluding on 18 July 2025.
EU could postpone some AI rules: Mathieu Pollet and Pieter Haeck from POLITICO reported that the European Commission's technology chief, Executive Vice President Henna Virkkunen, told EU digital ministers at a meeting in Luxembourg that certain parts of the AI Act could be delayed if standards and guidelines are not ready in time. Following intense lobbying, including from the US administration, companies are awaiting additional guidance and technical standards to meet their compliance requirements. Industry representatives have been advocating for a 'stop-the-clock' mechanism to postpone implementation dates if the necessary guidelines are not ready. Some EU ministers expressed support for potential delays. Poland's junior digital minister, Dariusz Standerski, told POLITICO that while the industry's request was reasonable, any delay must be accompanied by a clear action plan. Speaking at the Luxembourg meeting, which he chaired under the Polish Council presidency, Standerski emphasised that merely postponing deadlines without purpose would be futile. He stressed that simplification involves more than just reducing regulations, highlighting the importance of impact assessments, implementation costs, and using technology to make compliance easier.
Analyses
US tech giants ask Commission for the simplest possible AI code: According to Cynthia Kroet from Euronews, the European Commission has postponed the release of the voluntary Code of Practice on General-Purpose AI, with publication now expected before August. Major US technology companies, including Amazon, IBM, Google, Meta, Microsoft and OpenAI, met with Werner Stengg, an official from EU Tech Commissioner Henna Virkkunen's cabinet, to discuss the upcoming code. According to published meeting minutes, these companies advocated for simplifying the code to avoid redundant reporting and unnecessary administrative burden. Originally scheduled for 2 May, the final draft's release was delayed after numerous requests to extend the consultation period. During their meeting with Stengg, the companies emphasised the need for adequate implementation time following the code's publication. They also cautioned that the code should remain within the scope of the AI Act rather than extending beyond it.
Governing AI agents under the AI Act: Amin Oueslati, Senior Associate, and Robin Staes-Polet, Analyst, at The Future Society published the first comprehensive analysis of how the AI Act applies to AI agents - increasingly autonomous AI systems that can directly impact real-world environments. The analysis identifies three primary findings. Firstly, the Act regulates both the general-purpose AI (GPAI) models underlying AI agents and the agent systems themselves. An agent's classification as high-risk depends on its specific use case, unless explicitly excluded by the model provider. Secondly, effective risk management of agents requires governance throughout the entire value chain, addressing the "many hands problem" of distributed accountability. Requirements must be allocated across stakeholders, taking into account asymmetries in resources, expertise and contextual knowledge between model providers, system providers and deployers. Thirdly, the Act governs AI agents through four main pillars: risk assessment, transparency tools, technical deployment controls, and human oversight. These encompass ten sub-measures with specific requirements along the value chain.
Removing regulatory burden for more competitiveness and resilience: DIGITALEUROPE published a policy brief arguing that reducing regulatory burdens would help Europe maintain technological competitiveness, particularly in AI, quantum computing and advanced semiconductors. Despite strong positions in certain sectors, Europe lags behind in seven of eight strategic technology areas. The challenge is not a lack of market size or talent - Europe boasts 440 million consumers, 23 million companies, 15% of global GDP, 17% of world patent applications, and 18% of top-tier AI talent. Rather, its inability to scale and commercialise innovation stems from market fragmentation, limited national incentives, and complex regulations. The European Commission has pledged to reduce reporting obligations by 25% for large companies and by 35% for SMEs by 2029. However, DIGITALEUROPE advocates for more ambitious 50% cuts, in line with the Draghi report's recommendations. The policy brief targets three outcomes: simplifying overlapping regulations to reduce administrative burdens, improving legal clarity across Member States, and enhancing Europe's capacity to scale and compete globally.
Warning that the code for AI models is insufficient for downstream compliance: Luca Bertuzzi from MLex reported that, according to some experts, the EU code of practice for AI models lacks sufficient transparency measures, leaving downstream users unable to comply with the AI Act's due-diligence requirements for high-risk applications. The appliedAI Institute for Europe highlights a significant disparity between the information AI model providers must disclose under the code and what downstream users need for high-risk compliance. While the code aims to help major tech companies comply with the AI Act's rules for general-purpose AI models, crucial compliance aspects have been overlooked due to tight commenting deadlines and the regulation's complexity. Transparency measures received limited attention during discussions, overshadowed by debates on copyright and societal risks. The undefined nature of downstream players, who could potentially include anyone building high-risk systems using AI models, further complicates the issue. Without addressing this mismatch, experts suggest two likely outcomes: either regulators will lower compliance standards, or developers will avoid incorporating general-purpose AI models into high-risk applications altogether.