The EU AI Act Newsletter #56: General-Purpose AI Rules
Members of the European Parliament sent a letter to the AI Office asking for greater inclusion of civil society and other stakeholders in the drafting of codes of practice for general-purpose AI.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
MEPs have questions about the codes of practice process: POLITICO's Morning Tech published a letter (unfortunately behind a paywall) by MEPs Brando Benifei, Svenja Hahn, Katerina Konečná, Sergey Lagodinsky, Kim Van Sparrentak, Axel Voss and Kosma Złotowski urging the EU's AI Office to include civil society in the drafting of rules for powerful AI models. In the letter, they express concern that the Commission initially plans to involve only AI model providers, potentially allowing them to define these practices themselves. The MEPs argue this approach could undermine the development of a robust, globally influential code of practice for general-purpose AI models. They stress the importance of an inclusive process involving diverse voices from companies, civil society, academia, and other stakeholders. The lawmakers call on the AI Office to adhere to the AI Act's requirement to include these groups alongside AI model providers. The MEPs warn that allowing market-dominant companies to shape the process in isolation risks a narrow perspective, contradicting EU goals for AI development.
Latest on the codes: According to Euractiv's Tech Editor Eliza Gkritsi, the European Commission plans to allow AI model providers to draft codes of practice for compliance with the AI Act, with civil society organisations in a consultation role. This approach has raised concerns about industry self-regulation, because the codes will serve to demonstrate compliance for general-purpose AI models until harmonised standards are established. The Commission may grant these codes EU-wide validity through an implementing act. Some civil society members worry this could lead to Big Tech writing its own rules. The AI Act's language on stakeholder participation in drafting these codes is ambiguous. The Commission states that a forthcoming call for expressions of interest will outline how various stakeholders, including civil society, will be involved. However, details remain unclear. An external firm will be hired to manage the drafting process, including stakeholder engagement and weekly working group meetings. The AI Office will oversee the process but primarily focus on approving the final codes.
How to define high-risk products relative to sectoral rules? According to Euractiv's Jacob Wulff Wold, the European Commission is expected to classify AI-based cybersecurity and emergency services components in internet-connected devices as high-risk under the AI Act. This interpretation, revealed in a document on the interplay between the Radio Equipment Directive (RED) and the AI Act, sets a precedent for classifying other AI products. Under the classification criteria, an AI system is high-risk if it is a safety component of a product covered by existing harmonised legislation and that legislation requires third-party conformity assessment. Even where RED allows self-assessment, AI-based cybersecurity components are still considered high-risk. This approach may extend to other sectors covered by harmonised legislation relevant to the AI Act, such as medical devices, aviation, and heavy machinery.
Analyses
Summary of the codes of practice: Jimmy Farrell, EU AI policy lead at Pour Demain, published an introduction to the codes of practice for general-purpose AI (GPAI) model providers on the EU AI Act website. Article 56 of the Act establishes codes of practice as a temporary compliance mechanism for GPAI model providers. These codes bridge the gap between the point at which provider obligations take effect (12 months after entry into force) and the adoption of formal standards (3+ years). While adherence is voluntary, following the codes creates a presumption of conformity with the obligations under Articles 53 and 55; providers not following the codes must prove compliance by other means. The AI Office can invite GPAI providers and national authorities to draft the codes, with support from civil society, industry and academia. The drafting process may involve multi-stakeholder working groups divided by topic. The codes must be drafted within nine months of the Act's entry into force to allow time for Commission approval. They are expected to form the basis for future GPAI standards and should faithfully reflect the Act's intentions, including health, safety, and fundamental rights concerns. The structure may resemble previous frameworks such as the disinformation codes, featuring high-level commitments with detailed sub-measures and performance indicators.
Literature review for the codes: SaferAI wrote a literature review to inform the EU codes of practice on general-purpose AI (GPAI) models with systemic risks. The text emphasises the importance of organisational structures in governance, such as internal audit functions, AI ethics boards and the 'three lines of defence' model. For risk identification, the review recommends combining traditional techniques such as scenario analysis and the fishbone method with specialised methods like red-teaming. The authors suggest using the risk taxonomies proposed by Weidinger et al. and Hendrycks et al. for comprehensive risk identification. They add that risk analysis should integrate methods from the risk management and AI evaluation literature, with the BRACE (Benchmark and Red team AI Capability Evaluation) framework highlighted as a balanced approach. The authors state that mitigation strategies should include deployment and containment measures, safety by design, safety engineering and organisational controls. The text also emphasises the need for the codes of practice to be interoperable with other frameworks, including ISO/IEC 23894:2023, the G7 Hiroshima Process Code of Conduct, the Seoul Frontier AI Safety Commitments and the NIST AI Risk Management Framework.
Summary of the enforcement setup: Freshfields Bruckhaus Deringer's lawyers wrote a summary of the main institutions involved in the enforcement of the AI Act. The Act is enforced at EU level by the European Commission, with the AI Office holding exclusive powers over general-purpose AI (GPAI) and EU Member States handling other aspects through appointed market surveillance authorities and notifying authorities. The AI Office oversees GPAI compliance, coordinates cross-border investigations, and can overrule national decisions in certain cases. National market surveillance authorities have extensive powers for surveillance, investigation, and enforcement of non-GPAI provisions. The European Artificial Intelligence Board, comprising representatives from each Member State, advises the Commission and Member States on the consistent application of the Act. A scientific panel of independent experts supports enforcement activities and can issue 'qualified alerts' about systemic risks in GPAI models. The European Data Protection Supervisor is responsible for enforcing the Act with respect to EU institutions and bodies.