The EU AI Act Newsletter #91: Whistleblower Tool Launch
The European Commission has launched a whistleblower tool for reporting suspected breaches of the AI Act directly to the EU AI Office.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission launches Digital Omnibus proposal: The European Commission has proposed simplification measures within the Digital Omnibus to ensure effective AI Act implementation whilst supporting innovation. The package addresses artificial intelligence, cybersecurity and data regulations. One of the key provisions links rules for high-risk AI systems to the availability of necessary support tools such as harmonised standards, with implementation delayed by up to 16 months until the Commission confirms that these resources are ready. This approach aims to provide companies with essential guidance before enforcement begins. Other proposed amendments include extending SME simplifications to small mid-cap companies, potentially saving €225 million annually through reduced technical documentation requirements. The package also broadens regulatory sandbox access for more innovators, establishing an EU-level sandbox by 2028 and expanding real-world testing opportunities, particularly in sectors like automotive. Additionally, the Commission proposes reinforcing the AI Office's powers and centralising oversight of AI systems built on general-purpose models to reduce governance fragmentation.
Commission launches whistleblower tool: The European Commission has launched a whistleblower tool for the AI Act, providing a secure and confidential channel for reporting suspected breaches directly to the EU AI Office. The system allows whistleblowers to submit information in any EU official language whilst maintaining the highest levels of confidentiality through certified encryption mechanisms. The tool enables whistleblowers to report potential violations that could endanger fundamental rights, health or public trust without compromising their anonymity. Whistleblowers can receive updates and respond to additional questions from the AI Office through the secure system. This initiative supports the Act’s objectives of promoting AI innovation while addressing risks to health, safety and fundamental rights, as well as safeguarding democracy and the rule of law. By reporting information about violations, whistleblowers can contribute to the safe and transparent development of AI technologies.
Two workshops on classification of high-risk AI systems: The European Commission, in partnership with service provider PPMI (part of the Verian Group), is hosting two online workshops on 11-12 December 2025 to examine the classification of high-risk AI systems under the AI Act. These sessions will bring together experts, practitioners, policy and societal stakeholders to identify challenges related to the classification of AI systems as high-risk under the AI Act. The first workshop on 11 December (09:30-12:30 CET) will focus on high-risk AI systems in critical infrastructure, education, employment, and essential public and private services. The second workshop on 12 December (13:00-16:00 CET) will address classification of high-risk AI systems for biometrics, law enforcement, migration and justice. Both workshops aim to support the safe and responsible implementation of AI systems under the Act. Stakeholder input gathered from these discussions will inform upcoming Commission guidelines on classification of high-risk AI systems.
Analyses
Summary of the proposed Digital Omnibus on AI: Law firm Cooley has summarised the European Commission's proposed “Digital Omnibus on AI” (19 November 2025), which aims to streamline EU AI Act implementation, ease compliance burdens and adjust compliance deadlines before full application on 2 August 2026. Key amendments include: 1) extended timelines for high-risk AI requirements, delayed until harmonised standards are ready; 2) extended grace periods for legacy AI systems, giving generative AI providers six months extra (until 2 February 2027) to meet transparency obligations like content watermarking; 3) broader application of the bias mitigation derogation, allowing all AI providers and deployers to process sensitive data to reduce bias regardless of risk level; 4) deletion of registration requirements for non-high-risk AI systems under Annex III; 5) removal of the pathway for codes of practice to become hard law; 6) clarifications on conformity assessment procedures for embedded AI; 7) scrapped post-market monitoring templates; 8) new provisions introducing EU-level regulatory sandboxes; 9) extension of SME reliefs to small mid-caps; and 10) granting the Commission's AI Office exclusive supervisory authority over AI systems based on general-purpose AI models. The proposal now proceeds to the Council and Parliament, with trilogue negotiations expected and considerable time pressure to finalise changes before August 2026.
Some European startups welcome expected AI Act delay while others don't: Freya Pratty and Daphné Leprince-Ringuet from Sifted reported that some European startups have welcomed reports that the EU plans to delay AI Act enforcement, though others warn against appearing to capitulate to US pressure. The Commission is reportedly considering a one-year grace period for companies that breach rules for high-risk AI systems, and delaying fines for transparency violations until August 2027. Alexandru Voica from AI startup Synthesia supports a targeted rethink over proceeding with the current regulations, and Hugo Weber from Mirakl sees the delay as beneficial for preparation and responsible adoption. However, Feargus MacDaeid from Definely warns that the pause risks appearing as capitulation to external pressure, advocating a strategic pause rather than a political concession. EU Startups Commissioner Ekaterina Zaharieva maintains that Europe is proceeding independently. Some industry voices, including France Digitale's Marianne Tordeux-Bitker, express concern that delays create uncertainty after businesses have already spent over a year integrating the Act's constraints.
How the AI Act was made and what the future holds: Barbara Moens and Melissa Heikkilä in the Financial Times wrote that nearly two years after European officials celebrated agreement on the AI Act, the world's first comprehensive AI law, following 36 hours of exhausting negotiations in December 2023, Brussels is struggling with the complexities of implementation. The legislation was designed to establish trustworthy AI through risk-based regulation, but its complexity and the rushed inclusion of general-purpose AI models such as those underpinning ChatGPT have, for some, transformed it from a symbol of European leadership into a cautionary example. The Commission has now proposed postponing key high-risk AI rules by at least one year, the first formal acknowledgement of implementation difficulties. This shift reflects changing priorities under von der Leyen's second mandate, which focuses on competitiveness amid global AI competition between the US and China. Proponents of the AI Act see the developments as a setback, arguing that the delay undermines Europe's regulatory leadership, and dismiss criticism of the Act as industry “propaganda”. The delay remains controversial, requiring approval from Parliament and member states, as Europe grapples with balancing innovation and safety in AI governance.