The EU AI Act Newsletter #88: Resources to Support Implementation
To help implement the AI Act, the European Commission has launched two key resources: the AI Act Service Desk and the Single Information Platform.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Resources to support implementation: The European Commission has launched two key resources to facilitate AI Act implementation: the AI Act Service Desk and the Single Information Platform. These initiatives aim to support trustworthy AI development while providing necessary legal clarity across Europe. The Single Information Platform will serve as a central hub for AI Act information, offering stakeholders comprehensive guidance on implementation. The platform includes materials from Member States, FAQs and various other resources. Three digital tools are featured on the platform: 1) a Compliance Checker helping stakeholders identify their legal obligations and compliance requirements; 2) an AI Act Explorer for intuitive navigation through the Act’s chapters, annexes and recitals; and 3) an online form connecting users to the AI Act Service Desk, staffed by experts working alongside the AI Office.
Italy's AI law: Dan Cooper and Laura Somaini of Covington’s Data Privacy and Cyber Security Practice wrote in Inside Privacy that Italy has adopted its national AI law, with implementation beginning 10 October 2025. The legislation complements the EU AI Act and includes both general principles and sector-specific rules for areas not covered by EU legislation. The law designates two competent authorities: the Agency for Digital Italy (AgID) as the notifying authority and the National Cybersecurity Agency (ACN) as the market surveillance authority. The government has twelve months to adopt additional measures, including aligning the national framework with the AI Act, assigning administrative powers to competent authorities, establishing rules for training AI systems, regulating the use of AI in investigative and policing activities, and updating the framework for civil and criminal penalties. Notably, the final version omits previously proposed requirements for labelling AI-generated news content, as general transparency requirements under the AI Act apply.
Dutch want to clarify AI rules instead of delaying them: According to Euractiv's Maximilian Henning, the Netherlands has issued a position paper supporting clarification of AI rules over delays, while advocating for reduced regulatory burdens in the digital rulebook. The paper outlines three key principles. Firstly, maintaining the original goals of digital legislation while focusing on clarification and coherence. Secondly, reducing compliance costs through practical tools and assistance, especially for governments and SMEs. Thirdly, streamlining governance through enhanced coordination of European regulatory boards. Specific AI Act recommendations include 1) prioritising implementation simplification over deadline extensions, 2) creating a common list of critical infrastructure under Annex III, 3) developing compliance templates while maintaining flexibility for providers, and 4) extending the derogation for Quality Management Systems to SMEs.
Analyses
Dutch chips company slams EU for overregulating AI: Based on reporting from Pieter Haeck at POLITICO, ASML’s Chief Financial Officer Roger Dassen has criticised the EU’s approach to AI regulation, arguing that it drives talent and companies toward Silicon Valley. Speaking at an event in Eindhoven, he suggested that Europe’s regulatory-first approach is hampering AI development. ASML, Europe’s leading tech company by market value, has advocated for pausing parts of the AI Act’s implementation, joining 46 companies in requesting a two-year delay. The company recently became the largest shareholder in French AI firm Mistral with a €1.3 billion investment, strengthening its influence in the EU AI space. Dassen also urged completion of the EU’s capital markets union to improve startup funding, noting that while Europe excels at creating startups, it struggles with scaling them up.
California is getting its ‘AI Act’ together: Drew Liebert and David Evan Harris, the Director and Senior Policy Advisor, respectively, of the California Initiative for Technology and Democracy, have argued in a Tech Policy Press op-ed that California has taken significant steps in AI regulation while federal policy remains stalled, with Governor Newsom signing legislation on AI transparency and child protection. This state-level action reflects necessity rather than defiance of federal authority. Key measures include the AI Transparency Act of 2025, addressing fake online content, and Senator Wiener’s SB 53, which establishes safety standards for powerful AI systems and protects whistleblowers. Additional legislation targets AI chatbots’ potential harm to minors, requires mental-health warnings on social media, and strengthens user data protection. However, these reforms fall short of the EU AI Act’s scope and advocates’ desired protections. Notable gaps remain in location privacy and algorithmic fairness, with a proposed automated systems assessment requirement postponed. While progress has been made in children’s online safety, the legislation stops short of establishing strong financial accountability for platforms.
Timeline on guidelines on AI Act interplay: According to Luca Bertuzzi from MLex, the European Commission intends to release guidelines explaining how the AI Act interacts with other digital laws from the third quarter of 2026, potentially coinciding with or following the implementation of key provisions. This timing is particularly relevant for high-risk AI systems, whose core requirements take effect on 2 August 2026. The guidelines will address the AI Act’s relationship with other legislation, including the Medical Devices Regulation, the General Data Protection Regulation, the Digital Markets Act, the Digital Services Act, copyright rules and the broader product safety regime. Additionally, guidance on high-risk obligations and their application along the AI value chain is expected in Q2 or Q3 2026, while clarification on how incident reporting interacts with sectoral and horizontal legislation will follow later.
Commission not considering common specifications despite AI standards delays: The European Commission is not planning to develop mandatory technical requirements (common specifications) under the AI Act, despite delays in preparing technical standards needed for legal compliance, reported Luca Bertuzzi in MLex. Common specifications were intended as a fallback solution when technical standards prove inadequate or delayed. The standards for high-risk AI systems are now expected to arrive around August 2026, coinciding with the implementation deadline. At a recent closed-door meeting with the European Parliament’s AI Act implementation working group, Commission officials cited insufficient time and resources for developing common specifications. This lack of a fallback option might incentivise industry players to further delay standard-setting, particularly as such delays have already prompted calls to postpone high-risk requirements. Leading parliamentarian Brando Benifei has argued that any postponement should be contingent on the Commission’s commitment to implement common specifications if standards remain incomplete after an extension.