The EU AI Act Newsletter #93: Transparency Code of Practice First Draft
The first draft of the Code of Practice on transparency of AI-generated content has been released, aiming to help organisations comply with requirements for marking and labelling such content.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
First Code of Practice draft published: The first draft of the Code of Practice on transparency of AI-generated content addresses key considerations for providers and deployers of AI systems within the scope of Article 50(2) and (4) of the AI Act. Developed through a collaborative effort involving hundreds of participants from industry, academia, civil society and Member States, the Code emerged from two Working Groups established in November 2025. The drafting process incorporated 187 written submissions from a public consultation, three workshops and a review of expert studies. The Code aims to ensure that AI-generated and manipulated content is marked in machine-readable, detectable and interoperable formats, whilst enabling people to identify deepfakes and AI-generated text published on matters of public interest. This foundational draft invites stakeholder feedback by 23 January 2026 to inform the second version, which will facilitate transparency of AI-generated content within the EU.
Analyses
2026 to be the year the world comes together for AI safety? A Nature editorial predicts continued progress in artificial intelligence in 2026, driven by further model development and growing global AI legislation. According to Stanford University’s Artificial Intelligence Index Report 2025, at least 30 AI laws were enacted worldwide in 2023, rising to 40 in 2024. East Asia, the Pacific, Europe and US states have led regulatory efforts, with US states alone passing 82 AI-related bills in 2024. However, low- and lower-middle-income countries remain largely inactive, whilst the US federal government is undermining state-level AI regulation. The editorial argues that all nations require AI laws regardless of whether they are producers or consumers of the technology. It emphasises that it is impossible to imagine the technologies used in energy, food production, pharmaceuticals or communications sitting outside the ambit of safety regulation, and that AI should be subject to the same. Given AI’s transformative potential and uncertainties, nations should collaborate to develop policies that not only enable development but also incorporate guardrails. The editorial’s hope is that 2026 will be the year consensus is reached on this.
EU could get a ‘common icon’ for labelling AI deepfakes: Maximilian Henning from Euractiv reported that the Code will detail how companies should disclose that photorealistic AI-generated content, such as images or videos appearing to show real persons, is actually synthetic media. It will also establish conditions for marking other AI-created content. For deepfakes, drafters from industry, academia and civil society propose an “EU common icon” – a symbol enabling people to identify at a glance whether an image depicting an apparently real event or person has been created or edited using AI, whilst providing access to further information. The icon should include a two-letter acronym referring to artificial intelligence, typically “AI” in English, with language-specific adaptations. Companies signing the Code would commit to supporting the development of the common icon and to using an interim acronym-only version until it is finalised. Signatories training their AI models would also commit to enabling downstream users to create content identifiable as synthetic.
Europe braces itself for AI Act effects: William Denselow in CGTN reported that the European Union will implement the majority of its landmark AI Act rules in 2026, marking significant governance changes for AI. Following the Act’s entry into force in 2024 and its gradual application since, 2026 will see comprehensive enforcement at both EU and national levels, setting a precedent for the rapidly evolving sector. The EU emphasises trustworthy AI development, positioning European AI as a quality seal that inspires greater public confidence than alternatives from other sources. However, critics contend the Act constrains innovation as Europe competes with China and the United States. While EU officials assert that additional time is necessary for member states and companies to adapt, experts highlight the critical importance of clarity regarding liability issues and rigorous enforcement mechanisms: without proper enforcement, they caution, even well-crafted legislation may have no effect. European officials maintain that AI technology is essential for the bloc’s resilience, competitiveness and security, and as global competition for AI dominance continues, the EU seeks to establish the rules of the game.
A guide to Fundamental Rights Impact Assessments (FRIAs): The Danish Institute for Human Rights and the European Centre for Not-for-Profit Law have created a practical guide to help deployers of high-risk AI systems conduct FRIAs. The guide incorporates international standards such as the UN Guiding Principles on Business and Human Rights, is structured across five phases, and includes a downloadable FRIA template. Under the AI Act, certain deployers of high-risk AI systems must conduct a FRIA to identify and address potential negative impacts on rights enshrined in the EU Charter of Fundamental Rights. Such deployers include public entities using AI in, for example, education, employment, access to essential services and law enforcement. Beyond rights protection, well-designed FRIAs support organisations in developing responsible AI governance frameworks, building stakeholder trust, and minimising reputational and litigation costs. For public authorities, FRIAs can also help democratise decisions about adopting AI in critical areas.
AI Act for the automotive sector: Nicola Smith, David Naylor and Bartolomé Martín from the law firm Squire Patton Boggs published a summary exploring the intersection of the AI Act with traditional conformity assessment obligations for physical products. The authors state that many AI systems in the automotive sector will likely be treated as “high-risk” under the legislation, creating numerous regulatory obligations for providers, deployers, importers and distributors. Given the significant penalties for non-compliance, automotive companies using or planning to use AI systems must understand their obligations and ensure effective compliance. The requirements are being phased in over two years: provisions on AI literacy and prohibited AI practices have applied since February 2025, key provisions regarding high-risk AI systems take effect in August 2026, and provisions relating to high-risk classification arising from product coverage under certain EU harmonised legislation apply from August 2027. The AI Act has extraterritorial effect, applying to non-EU businesses selling AI systems on the EU market or whose outputs are intended for use in the EU.


