The EU AI Act Newsletter #90: Digital Simplification Package Imminent
The European Commission is expected to propose a year-long delay for key elements of its AI regulation in its forthcoming Digital Omnibus.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Code of practice on AI-generated content launches: The European Commission has initiated work on a code of practice for marking and labelling AI-generated content, launching with a plenary meeting on 5 November 2025. The initiative responds to increasing difficulty in distinguishing AI-generated from human-created content, aiming to reduce risks of misinformation, fraud, impersonation, and consumer deception. This voluntary code will help providers meet AI Act transparency requirements, which mandate clear marking of deepfakes and certain AI-generated content. The code will support content marking in machine-readable formats to enable detection across various media types, including audio, images, video and text, and will particularly focus on helping deployers disclose AI involvement in public interest matters. Independent experts appointed by the European AI Office will lead a seven-month, stakeholder-driven process, incorporating input from a public consultation and selected stakeholders. The underlying transparency requirements take effect in August 2026, complementing existing regulations for high-risk and general-purpose AI. Meet the chairs of the new code here.
Social democrats lay down red lines on revamping EU’s digital rulebook: As reported by Euractiv's Claudie Moreau and Maximilian Henning, the European Parliament’s social democrats have publicly expressed concerns about the upcoming digital omnibus proposal in a letter to Tech Commissioner Henna Virkkunen. The group warns it will oppose changes that could compromise fundamental rights. The European Commission is set to present modifications to EU digital laws this week, with leaked drafts suggesting changes to privacy and artificial intelligence rules. The social democrats strongly oppose any delay to the AI Act’s rules for high-risk systems, a delay advocated by some member states and industry groups. Additionally, the MEPs have raised serious concerns about proposals that would exempt providers of non-high-risk AI systems from mandatory registration in an EU database before market placement.
Greens/EFA letter on the digital omnibus: The Greens/EFA group in the European Parliament has written to EU Commissioner Henna Virkkunen expressing strong opposition to leaked digital omnibus proposals that would modify the AI Act. They argue the proposed changes would fundamentally undermine the law by yielding to non-EU industry pressure to remove important transparency requirements, weakening accountability and enforcement, and getting rid of much needed rules for AI literacy. The group warns that weakening transparency rules on energy use and sustainability standards, along with changes to fundamental rights impact assessments, could harm EU citizens while offering minimal innovation benefits. The Greens/EFA particularly criticise considerations to “stop the clock” due to standards-development delays, arguing this would remove incentives for swift standard finalisation. They suggest these delays have been deliberately orchestrated by those seeking to prevent effective enforcement of the rules for high-risk AI systems.
Analyses
EU prepares to delay high-risk rules by one year: Pieter Haeck and Gabriel Gavin from POLITICO reported that the European Commission is expected to propose a one-year delay to a key element of its AI Act, rules governing high-risk AI systems, according to two Commission officials. The delay, pushing implementation to August 2027, will be presented as part of a digital simplification package on 19 November. This shift marks a significant change in the EU’s approach, moving from its position as a regulatory front-runner to prioritising competitiveness with the US and China. The decision follows pressure from the US administration, tech companies and lobby groups, along with some EU member states citing delays in technical standards development. The proposal, which requires approval from EU countries and the European Parliament, was presented to Commission cabinet specialists. While Commission spokesperson Thomas Regnier declined to comment on leaks, he acknowledged ongoing discussions about potential implementation delays.
What’s driving the EU's AI Act shake-up? Freelance journalist Raluca Besliu wrote in Tech Policy Press that the European Commission plans to announce AI Act amendments on 19 November as part of the Digital Omnibus package. While US tech giants, through organisations like the Computer and Communications Industry Association (CCIA), lobby for widespread simplification of EU digital regulations, the Commission publicly maintains its independence in legislative decisions. Some internal EU divisions are evident, with Denmark advocating comprehensive reform, Germany opposing fundamental changes, and the Netherlands seeking a middle ground. The European tech sector is also divided, with 56 EU-based AI companies requesting simplification, while others criticise the proposed reforms as insufficient. Key proposals include potentially delaying high-risk requirement enforcement due to standards development delays and expanding the AI Office’s oversight powers to include all AI systems based on general-purpose AI models. This would centralise more authority in Brussels. Civil society organisations meanwhile warn that “simplification” could become a euphemism for deregulation, potentially weakening digital protections.
Biological AI is slipping through Europe’s AI law: Melissa Hopkins, the Health Security Policy Advisor at the Johns Hopkins Center for Health Security, published an op-ed in Tech Policy Press arguing that a critical regulatory gap in the AI Act leaves dangerous biological AI models (BAIMs) unregulated, despite their capacity to pose serious biosecurity risks. While the Act’s obligations for general-purpose AI providers took effect in August 2025, recent EU AI Office guidance appears to exclude these models from oversight. Although regulators are rightly concerned about large language models like ChatGPT potentially facilitating biological attacks by sharing sensitive information, they’ve overlooked BAIMs that could enable more severe outcomes, such as enhancing pathogen transmissibility and lethality. Unlike restricted language models, BAIMs are often released as open-source, increasing their accessibility and potential dangers. The disconnect between the Act’s recognition of biological risks as systemic threats and its implementation guidance could lead BAIM developers to conclude they’re exempt from regulation. To address this security gap, the AI Office could issue clarifying guidance to explicitly include BAIMs within the Act’s scope, rather than relying on case-by-case assessments.
Jobs
The Future of Life Institute (me!) is hiring for an EU Policy Advocate/Lead in Brussels. This person will lead FLI's research and enforcement efforts related to general-purpose AI under EU laws. The application deadline is 30 November 2025. Compensation is from €75,000 to €135,000 per year.



Insightful. The initiative to develop a code of practice for marking AI-generated content is a crucial step towards maintaining epistemic hygiene in our increasingly digital public sphere. While the focus on machine-readable formats is practical, the continuous adaptation of these standards to rapidly evolving generative AI capabilities will present a significant ongoing challenge for both developers and the broader public.
The timing of this Digital Simplification Package delay is interesting given how quickly AI capabilities are advancing. The code of practice on AI-generated content marking seems like a pragmatic approach to the deepfake problem, though voluntary compliance may be insufficient. The seven-month stakeholder process for implementation at least shows thorough consideration.