The EU AI Act Newsletter #68: New Year Kickoff
Korea has passed the "Basic Act on the Development of Artificial Intelligence and the Establishment of Trust", becoming the second nation globally after the EU to enact comprehensive AI legislation.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Second draft of the Code of Practice published: The second draft of the General-Purpose AI Code of Practice has been published following a stakeholder consultation involving approximately 1,000 participants, including representatives of EU Member States and international observers. The draft was shaped by Working Group meetings held in November 2024, at which participants provided feedback through verbal discussions and interactive polls. The process included 354 written submissions on the first draft and input from AI model providers through dedicated workshops. The Code serves as guidance for general-purpose AI model providers to demonstrate compliance with the AI Act throughout the lifecycles of their models. It is particularly relevant for models released after 2 August 2025, when the new rules take effect. Further discussions are scheduled for January 2025, with Working Group meetings covering technical risk mitigation, transparency, risk assessment, governance and copyright rules. Additional workshops with AI model providers and Member State representatives will follow. The third draft is expected in mid-February 2025.
MEPs lobby for longer deadlines and more details: Euractiv's tech journalist Jacob Wulff Wold reported that the two lead MEPs, Brando Benifei and Michael McNamara, are calling for changes to the consultation process for the Code of Practice on general-purpose AI. In a letter to Executive Vice-President Henna Virkkunen, they request longer consultation periods, suggesting three weeks instead of two for written feedback, as well as advance sharing of documents with the Parliament's working group. The MEPs argue that the current consultation periods are too brief for stakeholders to provide meaningful input. With multiple AI Act processes running simultaneously, smaller organisations and civil society groups face particular challenges in contributing effectively, given their limited resources compared to industry lobbyists. While the MEPs acknowledge positive progress in the Code's first version, they emphasise the need for clearer and more measurable provisions. Some civil society organisations advocate for more specific requirements, while other commentators, including Pierre Larouche of the Centre for Regulation in Europe (CERRE), warn that excessive detail might reduce the process to mere form-filling without meaningful engagement.
Analyses
Korea passes second comprehensive AI law in the world: Staff writer Kim Min-kuk reported in CHOSUNBIZ that Korea has passed the "Basic Act on the Development of Artificial Intelligence and the Establishment of Trust", becoming the second jurisdiction in the world after the EU to enact comprehensive AI legislation. The Act, which takes effect in January 2026, was passed following four years of discussions and the merger of nineteen bills. Under the Act, the Minister of Science and ICT can establish, every three years, a national AI plan to enhance competitiveness. It provides a legal foundation for the National AI Committee and the AI Safety Research Institute, focusing on protecting citizens from AI-related risks. The legislation supports AI industry development through R&D funding, standardisation and data policies. It promotes AI clusters, data centres and industry convergence while supporting SMEs and startups. The Act addresses the technical limitations and misuse of AI by defining high-impact and generative AI as subjects of regulation, mandating transparency and safety obligations. Minister Yoo Sang-im highlighted the significance of the Act in positioning Korea as an AI G3 powerhouse, noting its role in reducing uncertainties for corporations and encouraging public and private investment.
EU losing narrative battle over AI Act: Science|Business technology reporter Martin Greenacre wrote that senior figures have warned that the EU is losing control of the narrative around its AI Act, with European companies believing false claims that the legislation stifles innovation. Carme Artigas, the UN AI advisory board co-chair, suggests this narrative is deliberately promoted by the US to make European startups cheaper to acquire. Lucilla Sioli, head of the European Commission's AI Office, emphasises that the Act is simpler than perceived, mainly requiring self-assessment. She notes that while only 8% of EU companies currently use AI, the goal is to increase this to 75%. The Act categorises AI systems by risk level, with most systems facing no obligations. Holger Hoos acknowledges that it is in the interests of US tech companies to promote the narrative, but also believes there is a kernel of truth to it due to possible negative impact on certain parts of the AI ecosystem. Clark Parsons, CEO of the European Startup Network, suggests that it is limited access to capital and early-adopting customers, rather than regulation, that drives European AI companies to relocate to San Francisco.
Lack of enforcement capacity puts EU at risk: MEP Axel Voss warns that the European AI Office is severely understaffed for implementing the upcoming EU AI rules. Currently, the office has approximately 85 staff members, with only 30 working specifically on AI Act implementation. This contrasts sharply with the UK AI Safety Institute, which has over 150 staff focused solely on AI safety, despite the UK lacking formal AI legislation. The AI Office, although credited with setting global standards through the General-Purpose AI Code of Practice, is experiencing delays in delivering key components of the regulation, including a template for reporting on AI training content and consultations on unacceptable AI uses. Voss argues that the AI Office's Units A2 and A3 need to expand to over 200 staff by the end of next year to match the Commission's staffing commitment to the Digital Services Act. He emphasises that substantial expertise is required across systemic risks, market developments and legal aspects to govern advanced AI models effectively.
Implementing the AI Act in Belgium: Wannes Ooms and Thomas Gils of the Knowledge Centre Data & Society published a policy brief analysing the implementation requirements of the AI Act in Belgium, focusing on scope, exemptions and the designation of regulatory authorities. With regards to notifying authorities, the brief recommends that existing Belgian authorities currently overseeing product harmonisation should maintain their roles under the AI Act. This approach would ensure continuity and simplify the interpretation of requirements for high-risk AI systems. For market surveillance, the brief advocates centralisation under a single authority to ensure legal certainty and consistent interpretation, while maintaining cooperation with product-specific authorities. It suggests the Belgian federal data protection authority oversee biometrics, migration and administration of justice. For law enforcement AI systems, either the data protection authority or COC could be designated as competent authority. The brief recommends the FPS Economy as the central market surveillance authority for remaining high-risk AI systems, supported by sector-specific authorities.