The EU AI Act Newsletter #89: AI Standards Acceleration Updates
CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
German digital ministry proposes changes: According to Luca Bertuzzi from MLex, Germany’s digital ministry has proposed significant changes to the AI Act in a draft position paper circulated by the Federal Ministry for Digital Transformation, which requires approval from the Social Democratic Party to become the government’s official stance. The paper advocates for more lenient implementation and reduced requirements. It suggests one-year extensions to the deadlines for high-risk requirements and sectoral obligations, broader research exemptions encompassing real-world testing, and harmonisation of terminology across EU regulations. The ministry also seeks streamlined documentation requirements, clearer definitions of model providers and provider-user transitions, and a review of high-risk categories, particularly in insurance. Additional recommendations include simplifying technical documentation and AI literacy requirements, removing fundamental rights impact assessments for public bodies, and extending quality management system exemptions to SMEs and start-ups.
Dutch authority probe into early violations of the rules for general-purpose AI models: Luca Bertuzzi also reported that OpenAI, xAI and Mistral may face early enforcement actions under the AI Act following a Dutch Data Protection Authority investigation revealing their chatbots provided misleading voting advice ahead of parliamentary elections. The authority’s vice-chair, Monique Verdier, warned that this misguided advice threatens democratic integrity. While the Act’s general-purpose AI rules took effect on 2 August, they currently only apply to new models. Models released by these three companies after 2 August may face obligations if they meet the “systemic risk” threshold of 10²⁵ FLOPs computing power, though this information hasn’t been publicly disclosed. The Dutch authority has informally shared its findings with the EU AI Office. Developers could face private litigation, and chatbots influencing elections might be classified as “high-risk systems” under the Act, subjecting them to stricter regulations.
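The “systemic risk” presumption mentioned above is a simple numeric cutoff on cumulative training compute. As an illustrative sketch only (the function name and the example compute figures are hypothetical, not disclosed values for any of these companies’ models), the classification check amounts to:

```python
# The AI Act presumes a general-purpose AI model poses systemic risk when
# its cumulative training compute exceeds 10^25 floating point operations.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the Act's
    presumption-of-systemic-risk threshold (strictly greater than 10^25)."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical compute figures for illustration:
print(presumed_systemic_risk(5e24))  # below threshold -> False
print(presumed_systemic_risk(2e25))  # above threshold -> True
```

Because providers have not publicly disclosed training compute for these models, regulators cannot apply this check directly, which is why the newsletter notes the threshold status is unknown.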
Standards bodies’ decision to accelerate the development of standards: CEN and CENELEC have announced exceptional measures to expedite the development of European standards supporting the AI Act, following decisions made at their joint Technical Boards meeting in October 2025. The accelerated process includes allowing direct publication of drafts following positive votes, bypassing separate formal votes, and establishing a small expert drafting group to finalise six delayed drafts. These measures aim to ensure standards availability by Q4 2026 while maintaining standardisation principles of inclusiveness, transparency and consensus. The new drafting group will facilitate technical consolidation while preserving the participatory nature of the process. The organisations emphasise these are temporary measures to meet urgent demands for AI standards supporting the AI Act’s implementation.
Analyses
Controversy over fast-tracked standards-writing: According to Euractiv's Maximilian Henning, leaders of EU AI standards drafting have protested against the recent fast-tracking measures, warning of “serious unintended consequences” in a letter to EU authorities and the standards bodies CEN-CENELEC. The unprecedented acceleration allows smaller expert groups to finalise delayed AI standards, particularly those concerning high-risk AI systems’ trustworthiness, datasets and biases. The streamlined process includes clear deadlines and fewer procedural steps. The drafters argue that this undermines standardisation’s core principle of consensus, and they threaten to resign if the decision isn’t reversed. While CEN-CENELEC describes the measures as exceptional, unique and temporary, these assurances haven’t satisfied the protesters. The controversy relates to wider debates about potentially pausing the AI Act’s rules for high-risk systems. Proponents of a delay cite insufficient adjustment time for companies before the August 2026 implementation date, given the missing standards. Many of the countries supporting delays are themselves behind in establishing enforcement structures. The Commission is currently reviewing its standards-setting procedures.
Delaying AI Act enforcement for European SMEs: Sebastiano Toffaletti, DIGITAL SME Secretary General, wrote in an op-ed that with the AI Act’s enforcement for “high-risk” solutions beginning in ten months, essential implementation tools remain incomplete, particularly affecting SMEs’ ability to comply. While the Act’s aim to ensure trustworthy, human-centred technology is commendable, the current state of preparedness risks hampering European AI innovation. Of 45 required technical standards, only 15 have been published, and nearly half are projected to remain incomplete by the August 2026 deadline. Regulatory sandboxes, meant to provide safe testing environments, are largely unavailable: only Spain has one ready, while ten member states haven’t proposed legislation to create them. Additionally, only eight of 27 member states have designated the required market surveillance authorities. This situation forces European SMEs to either risk costly compliance guesswork or delay innovation. Toffaletti advocates delaying enforcement for SMEs until at least six months after all standards, sandboxes and guidance are operational, arguing this would strengthen rather than weaken the Act’s effectiveness.
Europe’s advanced AI strategy depends on a scientific panel: Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss, published an op-ed in Tech Policy Press noting that the European Commission has concluded its call for experts to join the AI Scientific Panel, a 60-member body that will advise the European AI Office on implementing the AI Act from 2026. The panel will focus on general-purpose AI, providing guidance on systemic risks, model classification and emerging threats. While hundreds have applied, member states have imposed restrictive national quotas requiring at least one expert per country and 80% of members from EU/EFTA states. This requirement presents challenges, particularly for smaller nations, and could compromise the quality of the panel’s expertise. Zenner argues for prioritising world-renowned AI researchers with proven track records in monitoring frontier developments over national quotas. He emphasises the need for independent third-party evaluators and specialists rather than generalists, suggesting that industry connections shouldn’t automatically disqualify candidates if balanced with independent experts. Additionally, he advocates for including younger voices actively involved in frontier AI development rather than retired professionals.
Jobs
The Future of Life Institute (me!) is hiring for a UK Policy Advocate/Lead in London. While this role won’t directly focus on the EU AI Act, it will require advising the UK government on its forthcoming AI bill. This will involve understanding the bill’s focus in comparison to the EU’s work. The compensation for this role is £95,000-£160,000 per year, with an application deadline of 19 November 2025.