The EU AI Act Newsletter #71: Wealth of Commission Guidelines
The European Commission publishes guidelines on AI system definition, prohibited AI practices and a living repository on AI literacy.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission publishes guidelines on AI system definition: The European Commission has published guidelines explaining the definition of AI systems under the AI Act, aiming to help providers and other stakeholders determine whether their software qualifies as AI under the new legislation. The guidelines are non-binding and are intended to evolve over time, with updates planned in light of practical experience, emerging questions and new use cases. As of 2 February, the first provisions of the Act have come into effect, including the AI system definition, AI literacy requirements and prohibitions on AI uses deemed to pose unacceptable risks within the EU. These draft guidelines have been approved by the Commission but not yet formally adopted.
Commission publishes guidelines on prohibited AI: The European Commission has published guidelines for prohibited AI practices under the AI Act, focusing on practices that threaten European values and fundamental rights. The guidelines address specific concerns such as harmful manipulation, social scoring and real-time remote biometric identification. While they aim to ensure consistent application of the AI Act across the EU, these guidelines remain non-binding, with definitive interpretations reserved for the Court of Justice of the European Union. To aid stakeholders in understanding and complying with the Act, the guidelines provide legal explanations and practical examples. These too have been approved but not yet formally adopted.
Living repository on AI literacy: The European Commission has established a living repository to support AI literacy, as required by Article 4 of the AI Act. This article, which took effect on 2 February 2025, mandates that AI providers and deployers ensure sufficient AI literacy among their staff and users. In response, the EU AI Office has compiled ongoing AI literacy practices from organisations involved in the AI Pact. The repository, available as a downloadable PDF, showcases various practices categorised by their implementation status: fully implemented, partially rolled out or planned. The repository is non-exhaustive and regularly updated, and replicating these practices does not guarantee compliance with Article 4. Instead, it serves as a resource to foster learning and exchange among AI providers and deployers. The Commission does not endorse or evaluate the listed practices, but offers the repository to support implementation of the Article.
Analyses
Google and Meta say the code should be weakened: According to POLITICO's Pieter Haeck, the European Union's proposed voluntary code of practice for advanced AI models is in peril after criticism from top Google and Meta executives. The code, which is meant to give practical substance to the AI Act's provisions, applies to companies operating general-purpose AI models, including OpenAI, Anthropic, Google, Meta and Microsoft. Google's senior public affairs official, Kent Walker, described the plan as a "step in the wrong direction" for European competitiveness. Meta's chief lobbyist Joel Kaplan similarly criticised the code, calling its requirements "unworkable and technically unfeasible." Kaplan indicated Meta would not sign the code in its current form, while Google remains undecided. The code aims to address issues such as training data disclosure and management of systemic risks. However, Walker argues it introduces requirements beyond that scope, particularly regarding copyright and third-party model testing. This criticism comes amid broader tensions over EU tech regulations, with US President Trump supporting American tech companies' claims that EU regulations amount to "tariffs." The code is expected to be finalised in April.
Civil society concerned about the code: According to Euractiv's Jacob Wulff Wold, some civil society groups are threatening to withdraw from the general-purpose AI Code of Practice drafting process, citing concerns about its inclusivity and transparency. These concerns have intensified following the AI Action Summit in France, where safety and human rights issues were overshadowed by business interests. A safety coalition, including Professor Stuart Russell and eighteen academics and think tanks, has submitted recommendations emphasising the need for mandatory external assessments and clearer risk thresholds. Meanwhile, human rights organisations criticise the relegation of rights-related risks to "additional considerations" rather than core concerns. This civil society pushback comes alongside criticism from tech giants Meta and Google, who claim the code is "unworkable." However, some activists, like Karine Caunes, dismiss this as a "fake argument", asserting that the code should indeed exceed the AI Act's requirements.
EU pushes ahead with enforcement: Barbara Moens, Henry Foy and Melissa Heikkilä at the Financial Times reported that the European Commission has issued new guidance on prohibited AI practices and on which systems qualify as AI. This regulatory push comes amid increasing tension with the US, as Trump has threatened retaliation against Brussels over its treatment of American tech companies, viewing EU fines as a form of taxation. While Brussels maintains its commitment to enforcing the Act, there is growing concern about US pressure. A senior EU official acknowledged these pressures but insisted the law would remain unchanged, though implementation would be made "innovation-friendly." Digital rights advocates worry this could lead to a weakening of the rules.
EU scales back tech rules to boost AI investment: Barbara Moens and Henry Foy at the Financial Times reported that the EU's digital chief, Henna Virkkunen, has stated that the bloc's move to reduce tech regulation is aimed at boosting AI investment, rather than responding to pressure from US tech companies and the Trump administration. Virkkunen emphasised the EU's commitment to supporting companies in implementing AI rules while reducing bureaucratic burdens. Despite Trump's threats of retaliation over EU fines on US tech companies and criticism from US Vice-President JD Vance about "onerous international rules", Virkkunen insisted the EU's deregulatory approach is independently motivated. The Commission has withdrawn a planned AI liability directive and will limit reporting requirements in the upcoming AI code of practice. Virkkunen affirmed that enforcement of existing platform regulations would continue, noting that these rules work to ensure a level playing field. She emphasised that while the EU remains open for business, it will continue to protect European values and maintain order in the digital world.