The EU AI Act Newsletter #94: Grok Nudification Scandal
Fifty-seven European Parliament lawmakers from across the political spectrum have called for a ban on AI applications that create non-consensual sexual deepfake images.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Grok nudification scandal: POLITICO's Pieter Haeck reported that fifty-seven European Parliament lawmakers across the political spectrum have called for a ban on AI applications that generate non-consensual sexualised deepfake images within the EU. The call follows widespread outrage at the proliferation of such images created by the Grok bot on Elon Musk’s social network X. The lawmakers contend that these AI systems should be prohibited under the EU’s AI law, citing their facilitation of sexual violence against women and children. The European Commission has requested additional information from X and ordered the retention of Grok-related documents until the end of the year. Although X announced it would prevent the editing of images of people in revealing clothing, POLITICO verified that users could still generate such images. Lawmakers argue that the Digital Services Act alone is insufficient to address the problem and have asked the Commission to confirm that these systems are banned under the AI Act or other EU legislation. Relatedly, Laura Caroli, a former co-negotiator of the AI Act, has written a Substack post exploring how an AI Act ban on Grok’s nudification tool would work.
European Parliament digital omnibus leadership: According to Claudie Moreau and Maximilian Henning from Euractiv, Parliament’s civil liberties committee (LIBE) has decided which political groups will lead the two digital simplification packages: the Socialists and Democrats (S&D) will lead on data, whilst Renew secured AI. Renew’s Michael McNamara will lead the AI package; he also co-chairs Parliament’s working group monitoring the Commission’s implementation of the AI Act. McNamara emphasised the importance of ensuring that amendments genuinely simplify implementation without weakening core safeguards, stating that whilst speed matters given the August 2026 deadline for high-risk AI compliance, urgency cannot replace evidence, transparency or accountability. Rapporteurs nevertheless face significant pressure to conclude talks promptly as that deadline approaches, and the Council is also prioritising this file. S&D has not yet chosen its rapporteur for the data proposal; the European People’s Party similarly remains undecided, whilst Markéta Gregorová was expected to represent the Greens. The Patriots have also not yet decided.
AI guidelines to step in if standards miss 2027 deadline: The European Commission is preparing guidelines as a contingency measure to address potential delays in AI Act technical standards, according to documents seen by Euractiv. The standards, developed by the European standardisation bodies CEN and CENELEC, are expected to detail compliance requirements for high-risk AI systems. However, repeated delivery delays have raised concerns, with certain standards now anticipated only in April 2027. These delays have prompted calls to freeze the relevant AI Act rules, with governments arguing that companies need the standards in order to comply. The Commission responded in November by proposing to postpone the high-risk AI rules to December 2027 or August 2028. The planned guidelines represent a transitional solution intended to forestall further delay requests should the standards remain unavailable. They would differ from common specifications, an alternative backup option already foreseen in the AI Act, under which the Commission may independently adopt specifications if industry-written standards do not arrive in time.
Proposal to simplify Medical Device Regulation: Elise Reuter, Senior Reporter at MedTech Dive, wrote that the European Commission has proposed revisions to its medical device and in-vitro diagnostic device regulations, a move praised by industry group MedTech Europe. The proposals aim to reduce administrative burdens, encourage regulatory coordination and include provisions for rare disease treatments, addressing “unnecessary costs, bottlenecks, uncertainty for companies, and delays for patients.” The Commission anticipates annual cost savings of approximately 3.3 billion euros from the revision. Key changes include simpler rules for medical devices, expedited conformity assessment timelines and a strengthened coordinating role for the European Medicines Agency across the EU. One proposal would narrow the extent to which the AI Act applies to medical devices with AI components, which can currently be regulated as high-risk AI systems. The revisions would also lower the risk classifications of certain devices, including reusable surgical instruments and accessories for implantable devices, and laboratory-developed tests would be exempted from certain requirements if used exclusively for clinical trials. The Medical Device Regulation and the In Vitro Diagnostic Regulation entered into effect in 2021 and 2022 respectively, with transition periods extended to 2027 or 2028 depending on a device’s risk class.
New boss at the EU AI Office safety unit: POLITICO Morning Tech reported that the European Commission has appointed Matthieu Delescluse as head of its AI Safety Unit, a position that had remained vacant since the AI Office’s establishment last year and was filled in the interim by AI Office head Lucilla Sioli. According to a newly published organigram of DG Connect, the Commission’s technology department which houses the AI Office, Delescluse is a DG Connect veteran who has held roles there since 2013. Whilst Delescluse lacks an extensive AI background, his previous role involved work on the EU’s economic security strategy, specifically the risk assessments conducted by the EU executive on four critical technologies: semiconductors, quantum computing, biotechnology and artificial intelligence.
Analyses
EU regulations are not ready for multi-agent AI incidents: Natàlia Fernández Ashman, Usman Anwar and Marta Bieńkiewicz published an op-ed in Tech Policy Press. The European Commission’s Article 73 guidelines for the AI Act, which take effect in August 2026, require providers and deployers to report serious incidents involving high-risk AI systems, such as those used in critical infrastructure. However, the authors believe that the draft guidelines contain a significant loophole that urgently needs correction: they focus on single-agent, single-occurrence failures with simple one-to-one causality, yet serious risks increasingly emerge from interactions between AI systems that produce cascading, cumulative effects. The current Article 73 wording assumes that only one system contributes to a high-risk incident, yet deployed AI systems increasingly interact with other agents and users, creating complex dependencies in which assigning culpability to a single system becomes untenable. Examples such as algorithmic collusion in Germany’s fuel market and the 2010 Flash Crash demonstrate how AI incidents can arise from multi-system interactions rather than isolated failures, with automated systems amplifying each other’s effects and triggering cascading network failures. The authors argue that the guidelines must explicitly recognise unexpected behaviour arising from system interactions, and that the draft additionally lacks structured third-party and whistleblower reporting pathways.