The EU AI Act Newsletter #63: Standards, Cooperation, Risk Management
The first Code of Practice plenary for general-purpose AI was held on 30 September, revealing disagreements between GPAI providers and other stakeholders.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Euractiv's tech journalist Jacob Wulff Wold reported that the European Commission held its first Code of Practice plenary for general-purpose AI (GPAI) on 30 September. The Commission shared the list of chairs and vice-chairs for the working groups drafting the Code and welcomed nearly 1,000 participants to the virtual plenary. The drafting process will incorporate input from a multi-stakeholder consultation, workshops with GPAI providers and the chairs and vice-chairs, and further Code of Practice plenaries. The first draft is expected around 3 November, with the final version due in April 2025. Preliminary results from the stakeholder consultation, which received almost 430 submissions, were presented at the plenary. The consultation revealed differing opinions between GPAI providers and other stakeholders on data disclosure and risk assessment measures. While the Commission received input from industry, rightsholders, civil society and academia, the plenary attendees were predominantly academics and experts participating in a personal capacity.
Analysis
Overview of EU standardisation supporting the Act: Lawyers at Skadden have written an overview of EU AI Act standardisation efforts. The European Commission has issued a standardisation request tasking CEN and CENELEC with developing European standards by 30 April 2025. These standards aim to ensure that AI systems on the EU market are safe, uphold fundamental rights and encourage innovation. The CEN-CENELEC Joint Technical Committee 21 (JTC 21) proposed a roadmap for AI standardisation, which was evaluated by the European Commission's Joint Research Centre. The evaluation identified many gaps in existing international standards and suggested additional standards to support the AI Act. JTC 21 has adopted some AI standards, including CEN/CLC ISO/IEC TR 24027:2023 and ISO/IEC 23894:2023. CEN and CENELEC have published a work programme and a dashboard detailing progress on developing additional standards. However, completion of the harmonised standards is expected to be delayed until late 2025, potentially leaving companies with less time to implement them before the AI Act's enforcement in August 2026.
UK enters a new era of cooperation with Europe? David Matthews, International Editor at Science|Business, wrote that the UK's new technology secretary has emphasised scientific and technological cooperation with the EU, marking a shift from the previous government's focus on post-Brexit regulatory divergence. While the previous administration unveiled plans to relax rules on research into genetically engineered crops and adopted a light-touch approach to AI regulation, the new Labour government is expected to introduce AI legislation, albeit legislation less comprehensive than the EU's AI Act. The UK aims to collaborate closely with both the US and the EU on AI, continuing its focus on AI safety through the AI Safety Institute. The planned legislation would make voluntary agreements with AI companies legally binding and establish the institute as an independent body. The UK thus seeks to position itself between the EU's comprehensive approach and the US's executive order-based strategy. However, some experts warn that this middle ground could leave the UK behind in AI regulation, and that the EU's market size may compel AI companies to prioritise compliance with Brussels' rules over any British framework.
Risk management in top AI companies: SaferAI, a French non-profit, has published ratings of the AI risk management practices of leading AI companies. The report states that Anthropic, OpenAI and Google DeepMind score moderately well, primarily due to their risk identification practices. Meta scored poorly on risk analysis and mitigation, while Mistral and xAI received the lowest scores, rated as "non-existent" in most categories. SaferAI CEO Siméon Campos emphasised the urgent need for robust risk management as AI capabilities advance. Yoshua Bengio, who chairs a working group drafting, with the Commission's AI Office, the Code of Practice provisions on the risk management measures GPAI providers should take to comply with the AI Act, endorsed the initiative. The AI Office is actively recruiting technical specialists to strengthen its risk management capabilities: a spokesperson reported the ongoing recruitment of 25 technology specialists, mostly with computer science or engineering backgrounds, to address risks associated with generative AI and general-purpose AI.
Are telcos ready for the AI Act? Independent journalist Michelle Donegan wrote in TM Forum about how the Act will require telecom operators to increase compliance efforts and costs to meet new safety standards. While the Act is not expected to alter operators' overall AI strategies, they must assess the risks of each deployment. Customer service chatbots are lower risk but still require transparency, whereas some telco use cases, particularly in critical digital infrastructure management and network operations, may be classified as high-risk and thus incur additional regulatory obligations. The regulation applies beyond Europe, affecting telcos worldwide. Several major telecom operators, including Deutsche Telekom, KPN, Orange, Telefónica, Telenor, TIM (Telecom Italia) and Vodafone, have joined the AI Pact, a voluntary initiative to implement AI Act requirements early. Orange views the AI Pact as an opportunity to communicate directly with the European Commission and prepare for the Act, while Telenor welcomes the Act as a significant contribution to global AI development standards.
The AI Act has transformed the role of chief privacy officer: Ron De Jesus, Field Chief Privacy Officer at Transcend, wrote an op-ed for The Parliament Magazine arguing that the Act has significantly expanded the role of chief privacy officers (CPOs), placing new burdens on them. This shift requires CPOs to develop technical skills and authority beyond traditional data protection. CPOs must now oversee AI systems for transparency, fairness, copyright compliance and data security, and they need to understand AI algorithms, machine learning models and automated decision-making systems, along with the associated risks and ethical challenges. De Jesus states that the impact is already evident across sectors, from financial services to healthcare and e-commerce. To address these new responsibilities, he continues, enterprises must invest in education for CPOs, equipping them with the skills to tackle AI's technical, ethical and legal implications, and must allocate more financial and human resources to support them.