The EU AI Act Newsletter #58: EU AI Law Enters into Force
On 1 August, the EU AI Act took effect. It aims to promote responsible AI development and deployment in the EU.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
AI Act enters into force: On 1 August, the EU AI Act took effect, aiming to promote responsible AI development and deployment in the EU. Initially proposed in April 2021 and agreed upon by the European Parliament and the Council in December 2023, the Act addresses potential risks to citizens' health, safety, and fundamental rights. It sets clear requirements for AI developers and deployers while minimising administrative and financial burdens on businesses. The Act introduces a uniform framework across EU countries, employing a risk-based approach:
1) Minimal risk: systems like spam filters face no obligations but may voluntarily adopt codes of conduct;
2) Specific transparency risk: systems like chatbots must inform users they are interacting with machines, and AI-generated content must be labelled;
3) High risk: for instance, systems in medicine and recruitment must meet stringent requirements, including risk mitigation and human oversight;
4) Unacceptable risk: for example, systems enabling "social scoring" are banned due to fundamental rights threats.
Please also check out this Implementation Timeline, which we created on the EU AI Act website to provide a user-friendly overview of upcoming implementation milestones.
AI Office consultation on general-purpose AI: The European AI Office has launched a multi-stakeholder consultation, open from 30 July to 10 September, on trustworthy general-purpose AI (GPAI) models under the AI Act. The consultation allows stakeholders to provide input on the first Code of Practice, which will detail rules for GPAI model providers. It will also inform the AI Office's work on a template for summarising the content used to train these models and on the accompanying guidance. The AI Office encourages participation from a wide range of stakeholders, including academia, independent experts, industry representatives, civil society organisations, rightsholders and public authorities. The consultation questionnaire is divided into three sections:
1) Transparency and copyright-related provisions for GPAI models;
2) Risk taxonomy, assessment and mitigation for GPAI models with systemic risk; and
3) Reviewing and monitoring the Codes of Practice for GPAI models.
An initial draft of the Code of Practice will be developed based on the submissions, and the AI Office will publish a summary of the aggregated consultation results.
Call for expression of interest: The AI Office has issued a call for expression of interest to help draft the first general-purpose AI (GPAI) Code of Practice. Eligible participants include AI model providers, downstream providers and other industry organisations, civil society, rightsholders, academia and other independent experts. The Code, to be drafted iteratively by April 2025, aims to ensure the proper application of the AI Act's rules for GPAI models, including those with systemic risks. Expressions of interest can be submitted until 25 August. The drafting process, starting with a kick-off Plenary in September, will include three virtual meetings and feedback rounds. Participants will join the Code of Practice Plenary, which will consist of four Working Groups. Chairs and Vice-Chairs will synthesise submissions from the consultation and from plenary participants. In addition to their Plenary participation, providers of GPAI models will be invited to workshops with the Chairs and Vice-Chairs to contribute to each drafting round. After publication, the AI Office and AI Board will assess the Code's adequacy, and the Commission may approve it or, if it is deemed inadequate, provide common rules instead.
Parliament sets up a working group: According to Euractiv's Eliza Gkritsi, two of the European Parliament's committees – the Internal Market and Consumer Protection (IMCO) committee and the Civil Liberties, Justice and Home Affairs (LIBE) committee – have established a joint working group to oversee the implementation of the AI Act. MEPs have previously raised concerns about the transparency of the AI Office's staffing process and civil society's involvement in the implementation process. Details regarding the working group's approach, membership, and meeting frequency will be determined after the summer.
Analyses
Roles for AI safety institutes: Alexandre Variengien, Co-founder and Head of Research at Centre pour la Sécurité de l'IA (CeSIA), and Charles Martinet, Summer Fellow at the Centre for the Governance of AI and an Affiliate Researcher at the Oxford Martin AI Governance Initiative, wrote in an OECD AI Wonk article that, in response to the rapid development of AI, the US, UK, Japan, Canada and Singapore have established AI Safety Institutes (AISIs) to assess AI systems' capabilities and risks, conduct safety research, and facilitate information exchange. The EU's AI Office, through its AI Safety Unit, will undertake similar tasks alongside its regulatory duties. According to the authors, these institutes aim to create a coordinated global strategy for safe and beneficial AI. AISIs attempt to address the unpredictability of AI models, which poses challenges for safety and reliability. They also seek to establish evaluation standards and build consensus for better AI governance. In addition, countries vary in their AISI mandates: the UK's institute aims to minimise surprise from rapid and unexpected advances in AI, while others, like the EU's AI Office, have regulatory powers. Finally, AISIs also contribute to international coordination, assisting countries in developing regulatory frameworks, setting science-based thresholds and contributing to global governance and standards.
OpenAI's primer on the Act: OpenAI published a preliminary overview of the AI Act, describing it as a significant regulatory framework for managing AI development, deployment and use across Europe, one that emphasises safety to ensure trustworthy AI adoption while protecting health, safety and fundamental rights. OpenAI states that it is committed to complying with the Act, not only because of its legal obligations but also because the Act aligns with OpenAI's mission to develop and deploy safe AI for the benefit of humanity. The company undertakes various technical efforts, including model evaluations under its Preparedness Framework, internal and external red-teaming, post-deployment monitoring, Bug Bounty and Cybersecurity Grant Programs, and contributions to authenticity standards. OpenAI plans to collaborate closely with the EU AI Office and other authorities as the Act is implemented, offering its expertise to support the Act's objectives. The company will prepare technical documentation for downstream providers and improve the security and safety of its models in Europe and beyond.