The EU AI Act Newsletter #62: AI Pact Signed; Code of Practice Launched
The European Commission collects over 100 AI Pact signatures, and an online kick-off plenary for the general-purpose AI code of practice is scheduled for 30 September.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Over a hundred companies sign AI Pact pledges: The European Commission has announced over a hundred initial signatories to the AI Pact, including multinational corporations and SMEs from various sectors. The Pact supports voluntary adherence to the AI Act principles before its official implementation and deepens engagement between the AI Office and stakeholders. Signatories commit to at least three core actions: developing an AI governance strategy to comply with the Act, mapping high-risk AI systems, and promoting AI literacy among staff. More than half of the participants have also pledged to ensure human oversight, mitigate risks and transparently label certain AI-generated content. The Pact remains open for companies to join and commit to both the core and additional pledges until the AI Act is fully applied.
Over 400 submissions for the code of practice for general-purpose AI: The Commission received nearly 430 submissions for its consultation on the upcoming Code of Practice for general-purpose AI (GPAI), as outlined in the AI Act. These submissions will inform the finalisation of the Code by April 2025, with GPAI provisions taking effect on 1 August 2025. Key areas of focus include transparency, copyright rules, risk assessment and mitigation, and internal governance. The input will guide the AI Office in implementing and enforcing the rules on GPAI, and in developing guidelines for summarising training data used in GPAI models. Additionally, almost a thousand organisations and individuals worldwide expressed interest in participating in the drawing-up of the first Code of Practice for GPAI. An online opening plenary is scheduled for 30 September.
MEPs question appointment of leaders for general-purpose AI code of practice: According to Euractiv Tech Editor Eliza Gkritsi, three MEPs are questioning the European Commission's process for appointing key positions for the drafting of the general-purpose AI code of practice. On 24 September, the Commission responded to those interested in participating with limited details beyond the first plenary scheduled for 30 September; more information on the drafting process is expected. MEPs Axel Voss, Svenja Hahn and Kim van Sparrentak have submitted questions about how the Commission is appointing the chairs and vice-chairs of the working groups, particularly with respect to international expertise. They seek clarification on whether the appointments will be announced by Monday's plenary and how the Commission will ensure delivery given the short timeframe.
Chairs announced for the working groups: The AI Office has announced the chairs and vice-chairs of the four working groups developing the first General-Purpose AI Code of Practice. These experts, chosen for their diverse backgrounds in computer science, AI governance and law, will guide the process from October 2024 to April 2025. The selection criteria emphasised expertise, independence, geographical diversity and gender balance. For instance, the transparency and copyright working group is co-chaired by experts in European copyright law and AI transparency. The four working groups will address transparency, copyright, risk assessment, mitigation measures and internal risk management for general-purpose AI providers. The chairs and vice-chairs will lead discussions, synthesise input from participants, and work towards presenting a final draft by April 2025.
Analysis
Future of the Pact remains uncertain: Euractiv's Eliza Gkritsi and Jacob Wulff Wold reported that the AI Pact's future is uncertain following Commissioner Thierry Breton's resignation. The Pact includes Pillar I, a peer-to-peer network for exchanging best practices, and Pillar II, a series of pledges. Major tech companies like Microsoft, Google, Amazon and OpenAI are among the 115 signatories, while others such as Meta, Anthropic and Mistral are not on the initial list. Some companies were reluctant to sign due to concerns about prescriptiveness and potential interference with AI Act compliance efforts. The Pact comprises three core commitments and additional voluntary ones, with about half the signatories committing only to the core elements. Signatories must report on their implementation after twelve months, though the exact reporting requirements remain unclear.
Tech giants focus on the code of practice: Reuters’ European Technology Correspondent Martin Coulter wrote that the enforcement of rules for general-purpose AI, including how many copyright lawsuits and multi-billion-dollar fines companies may face, will remain unclear until the accompanying codes of practice are finalised. The EU has invited various stakeholders to help draft the code of practice, receiving an unusually high number of applications. Although not legally binding, the code will provide a compliance checklist for companies, and ignoring it could invite legal challenges. Major tech companies and non-profit organisations have applied to participate in drafting the code. Industry representatives stress the importance of getting the code right to allow continued innovation, without making it too narrow or specific. Some stakeholders have also voiced concern that companies are going out of their way to avoid transparency.
European Parliament study on AI liability: Euractiv's tech journalist Jacob Wulff Wold reported that the European Parliamentary Research Service (EPRS) has recommended that liability rules cover general-purpose AI products and that a broader legal instrument for software liability be developed. The study suggests that the liability regime should cover general-purpose AI, as well as prohibited and high-risk uses of AI as defined in the AI Act. The EPRS proposes transitioning the Artificial Intelligence Liability Directive (AILD) into a broader Software Liability Instrument. The study examines the interaction between the AILD, the AI Act and the updated Product Liability Directive. MEP Axel Voss indicated that the next steps in the JURI committee will be decided in October. The EPRS recommends extending the AILD's scope to prevent market fragmentation and enhance clarity across the EU. It also suggests applying strict liability to AI prohibited under the AI Act and considering its application to "high-impact" systems.
Looking forward to seeing what the four working groups of the AI Office will get done.