The EU AI Act Newsletter #84: Trump vs Global Regulation
President Trump’s AI Action Plan aims to deregulate AI, prioritising American supremacy over risks. However, this plan won’t protect US companies from global regulation.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Germany seeks to reduce regulatory burdens: Pieter Haeck from POLITICO reported that Germany's new digital ministry is in discussions with Brussels about avoiding overregulation in AI, according to a blog post outlining Digital Minister Karsten Wildberger's first hundred days in office. The ministry stated that talks are underway with the European Commission and partner countries "to ease the burden on the economy and prevent and reduce overregulation" in order to enable AI innovation and development. Germany joins other countries expressing concerns about the AI Act's impact on the bloc's AI advancement. Swedish Prime Minister Ulf Kristersson previously called for the rollout of unimplemented AI Act provisions to be paused, and European Commission tech chief Henna Virkkunen indicated the Commission would decide on such a pause by late August. The EU's AI rules are being examined within a broader package to simplify the EU's tech rulebooks, expected by the end of this year. Representatives from EU countries are scheduled to meet on 18 September regarding AI, according to the EU's Artificial Intelligence Board meeting register.
Analyses
EU AI rules could constrain Trump's deregulatory agenda: Anu Bradford, Henry L. Moses Professor of International Law and Organizations at Columbia Law School, wrote an op-ed in the New York Times. She argues that President Trump wants to unleash American AI companies by rolling back regulations through his AI Action Plan, believing that American supremacy outweighs risks such as surveillance, disinformation or even existential threats to humanity. However, Trump cannot protect US companies from global regulation. American firms wanting to operate in international markets must follow local rules, meaning the EU's commitment to AI regulation could thwart Trump's vision of self-regulated, free-market dominance. Under the Brussels Effect, EU digital regulations extend globally because companies find maintaining separate regional policies costly. Apple and Microsoft use the GDPR as their global privacy standard, and governments often model their laws on EU rules. Whilst Meta accuses the EU of overreach and seeks Trump administration support, OpenAI, Google and Microsoft are signing Europe's AI code of practice, seeing opportunities to build trust among users and streamline global policies. Bradford argues that Europe must withstand mounting pressure to abandon regulation: AI governance and innovation are not mutually exclusive, and Europe's AI lag stems from foundational weaknesses in its technological ecosystem, not from digital regulations.
OpenAI urges harmonised AI regulation in letter to California Governor: OpenAI has written to Governor Gavin Newsom advocating for harmonised AI regulation rather than a patchwork of state rules. The company warns that the volume of AI-related bills moving through state legislatures this year – approximately 1,000 – could slow innovation without improving safety. It proposes that companies adhere to federal and global safety guidelines, creating a national model for other states to follow. OpenAI recently committed to working with the US government's new Center for AI Standards and Innovation (CAISI) to evaluate frontier models' national security capabilities. In the letter, OpenAI recommends that California treat frontier model developers as compliant with state requirements when they have entered into safety agreements with federal agencies like CAISI or signed parallel frameworks such as the EU's AI Code of Practice.
Whistleblowing and the AI Act: Santeri Koivula, an EU Fellow at the Future of Life Institute, and Karl Koch, founder of the AI Whistleblower Initiative, published an overview on the AI Act website explaining how the EU Whistleblowing Directive (2019) relates to the AI Act and providing resources for potential whistleblowers. The Directive protects whistleblowers who report violations of EU law by requiring clear reporting channels and safeguarding against retaliation. Protections extend to a range of professionals, including employees, contractors, suppliers, job applicants and former workers. Reports can be made internally within organisations, externally to national authorities, or publicly where urgent public interest exists or there is a risk of retaliation. From 2 August 2026, whistleblowing protections will explicitly cover AI Act violations, though some AI-related issues may already fall under existing protections.
AI Act faces new political reality one year on: Sarah Chander, Director of the Equinox Initiative for Racial Justice, and Caterina Rodelli, EU Policy Analyst at Access Now, wrote in Tech Policy Press that the political landscape has shifted dramatically since the AI Act was passed a year ago. They attribute this shift to transatlantic AI competition, Brussels' deregulatory agenda and increasing militarisation. These changes challenge the Act's foundational assumptions about balancing rights and innovation. Following the Draghi report's criticism of Europe's regulatory approach, the European Commission unveiled sweeping deregulation measures to boost competitiveness. By June 2025, Commissioner Henna Virkkunen had confirmed that the AI Act's safeguards could be diluted before 2026 implementation. Predictive policing, migration risk-scoring, biometric categorisation based on race/ethnicity proxies, and emotion recognition by authorities remain possible under the Act. Chander and Rodelli advocate for strengthening bans on mass surveillance, challenging digital border systems, and demanding accountability for publicly funded private surveillance. With full implementation of the Act due in August 2026, the next twelve months represent a critical window for civil society to resist the erosion of protections.
Code of Practice enters challenging implementation phase: Lisa Soder, Senior Policy Researcher at interface, and Ema Prović, an Officer at the European AI Office of the European Commission, noted that the Code of Practice's publication marks a new phase in the EU's regulatory ambitions. For the first time, the AI Office will gain insight into the risk management processes of the world's most advanced AI models. Documentation through the Code may generate detailed empirical records of incident logs, risk assessments and mitigation effectiveness, serving as feedback for future AI policies. However, realising these opportunities depends on the AI Office confronting formidable challenges. Firstly, building the institutional capacity and technical expertise to use its enforcement powers requires a deep pool of talent capable of scrutinising complex systems, potentially following the UK AI Safety Institute's model of hiring from major tech firms. Secondly, technical capacity must be backed by the political will to resist US pressure; a promising signal is Commissioner Henna Virkkunen reportedly stating that EU digital rules are non-negotiable in trade talks. Finally, the Office must keep the Code precise and relevant by clarifying the scope of 'systemic risk' and establishing formal update mechanisms to prevent obsolescence.

Trump's deregulatory posturing reveals a fundamental misunderstanding of global tech economics.
When American AI giants inevitably choose EU compliance over fragmented regulatory strategies, his AI agenda will collide with the rights of 450 million European consumers.
On the other hand, it is crucial to develop legislation that does not choke EU industry's AI innovation with unnecessary burdens, and that gives companies clear, practical direction to follow.