The EU AI Act Newsletter #78: Cutting Red Tape
Risto Uuk and Sten Tamkivi argue that Europe’s path to AI competitiveness lies in cutting actual bureaucratic red tape, not in removing AI safeguards.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Stakeholder feedback on AI definitions and prohibited practices: The European Commission published a report prepared by the Centre for European Policy Studies (CEPS) for the EU AI Office, analysing stakeholder feedback from two public consultations on AI Act regulatory obligations. These consultations examined the definition of AI systems and prohibited AI practices, both of which have been applicable since 2 February 2025. The report analyses responses to 88 consultation questions across nine sections. Industry stakeholders dominated participation, accounting for 47.2% of nearly 400 replies, whilst citizen engagement remained limited at 5.74%. Respondents requested clearer definitions of technical terms such as "adaptiveness" and "autonomy", warning against inadvertently regulating conventional software. The report highlights significant concerns regarding prohibited practices, including emotion recognition, social scoring, and real-time biometric identification. Stakeholders called for concrete examples distinguishing what is prohibited from what remains permissible under the regulation.
AI literacy questions and answers: The European Commission has published an extensive Q&A on AI literacy. Article 4 of the AI Act became applicable on 2 February 2025, requiring providers and deployers of AI systems to ensure sufficient AI literacy amongst their staff and other persons handling AI systems on their behalf. When establishing adequate literacy levels, providers and deployers must consider individuals' technical knowledge, experience, education and training, alongside the context in which the AI systems will be used, including the persons targeted by such systems. The Q&A covers key areas including how AI literacy is defined under Article 4 and the broader AI Act, how to comply with Article 4, how the obligation will be enforced, the AI Office's approach to AI literacy implementation, and additional resources for stakeholders seeking to understand and meet these new obligations.
No lead scientific adviser on AI yet despite dozens of applications: According to Cynthia Kroet, Senior EU Policy Reporter at Euronews, the European Commission has yet to appoint a lead scientific adviser for its AI Office, despite receiving "dozens of applications" and with the general-purpose AI rules taking effect on 2 August. According to a senior AI Office official speaking to Euronews, the recruitment process is still ongoing even though the vacancy was opened between November and December last year. The adviser's role involves ensuring an advanced level of scientific understanding of general-purpose AI and leading the scientific approach across all AI Office work, whilst maintaining the scientific rigour and integrity of AI initiatives. The adviser will focus in particular on testing and evaluating general-purpose AI models in close collaboration with the AI Office's Safety Unit. The Commission has indicated that it would prefer to appoint a candidate from a European country to the position.
Analyses
The EU should cut actual red tape, not AI safeguards: Risto Uuk (me!) and Sten Tamkivi, a partner at Plural, a leading European early-stage venture fund, published an op-ed in Fortune arguing that Europe's path to AI competitiveness lies in cutting actual bureaucratic red tape, not in removing AI safeguards. Independent experts are collaborating with the EU AI Office to develop practical guidelines for general-purpose AI companies, translating regulatory principles into unified practices across member states to prevent fragmented national rules. This work primarily affects global giants like Meta and Google, which are among the roughly ten companies worldwide building the most capable models, rather than European startups. Uuk and Tamkivi argue that these well-resourced corporations can easily manage requirements such as third-party evaluations, in which independent entities assess under certain conditions whether an AI model is safe. Such assessments help ensure that models like Meta's Llama or Google's Gemini do not pose systemic risks, including cyberattacks, bio-risks or loss of control. In the authors' view, claims that such tests are too burdensome or unworkable are ludicrous given these companies' unprecedented resources. These safeguards mirror established practices in pharmaceuticals, aviation and finance, where independent verification builds public trust and investor confidence. For European economic growth, they argue, the focus should be on reducing traditional business bureaucracy rather than removing AI safety regulations.
The value of the Code of Practice safety and security framework: Risto Uuk (me again!) published a newsletter on LinkedIn highlighting some of the main arguments in favour of the Code. Firstly, the Code of Practice translates the AI Act's vague essential requirements into actionable guidance for general-purpose AI providers, addressing obligations such as model evaluation and systemic risk mitigation. Secondly, the Code compiles the best safety practices from top AI companies in one place: recent research analysed how the Code compares with existing practices from leading AI companies including OpenAI, Anthropic and Google DeepMind, aiming to establish centralised industry best practices. Thirdly, the European Commission encourages adoption of the Code, offering benefits such as increased trust and streamlined enforcement for signatories, whilst non-signatories face additional scrutiny and information requests. Fourthly, the Code primarily targets approximately eleven global providers with models exceeding 10^25 FLOPs – highly resourced companies such as OpenAI, which recently secured $40 billion in funding. Finally, the Code is being developed through an exemplary democratic process: since October 2024, around a thousand stakeholders have contributed across three drafts, led by thirteen independent chairs, contrasting favourably with typical corporate-dominated technical standards development.
ABBA legend warns against diluted rights in the EU AI code: According to Cynthia Kroet from Euronews, ABBA member Björn Ulvaeus warned MEPs in Brussels recently about "proposals driven by Big Tech" that weaken creative rights under the AI Act. Speaking to the European Parliament's Committee on Culture and Education, Ulvaeus, who serves as president of the International Confederation of Societies of Authors and Composers (CISAC), expressed concerns about the voluntary Code of Practice on General Purpose AI. "The argument that AI can only be achieved if copyright is weakened is false and dangerous. AI should not be built on theft," Ulvaeus stated, calling such an approach "an historic abandonment of principles." He criticised the Code for ignoring creative sector calls for transparency, urging the EU to lead on AI regulation rather than backslide, ensuring implementation stays true to the Act's original objective.
Feedback
We are looking for two minutes of feedback on the AI Act website so that we can build the resources most helpful to you. We have produced many of the resources on our website in direct response to user feedback, sometimes within just one to two weeks. This site exists to provide helpful, objective information about developments related to the EU AI Act, and it is used by more than 150,000 users every month. Thank you in advance for your time and effort. We hope to repay it with tailored, high-quality information addressing your needs.