The EU AI Act Newsletter #53: The Law Is Finally Adopted
The Council of the EU has approved the AI Act, and the law will enter into force soon.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
AI Act is finally adopted: The Council of the EU approved the AI Act on 21 May. This legislation, the first of its kind in the world, takes a 'risk-based' approach, imposing stricter regulations on AI systems with greater potential to harm society. The Act aims to harmonise AI rules, potentially setting a global standard for AI regulation. Its primary goal is to promote the development and adoption of safe and trustworthy AI systems within the EU's single market, ensuring they respect fundamental rights while fostering investment and innovation. However, the Act exempts certain areas, such as military and defence applications and systems used solely for research. Mathieu Michel, Belgian Secretary of State for digitisation, administrative simplification, privacy protection and building regulation, highlighted the Act's emphasis on trust, transparency and accountability, while ensuring that this fast-changing technology can flourish and boost European innovation. Once signed by the presidents of the European Parliament and of the Council, the Act will be published in the EU's Official Journal and enter into force twenty days later, with a two-year implementation period (with some exceptions).
Analyses
Areas of convergence and divergence between the EU and US: Benjamin Cedric Larsen, AI & Machine Learning Project Lead at the World Economic Forum, and Sabrina Küspert, Policy Officer at the European AI Office, European Commission, published an overview on The Brookings Institution's website highlighting both commonalities and disparities between the EU and US approaches to regulating general-purpose AI. While the US executive order focuses on guidelines for federal agencies to shape industry practices, alongside reporting requirements under the Defense Production Act, the AI Act directly regulates general-purpose AI models within the EU with legally binding rules. The executive order can be modified or revoked, whereas the AI Act establishes a lasting governance structure. The EU's threshold for regulated general-purpose AI models is lower than that of the US, potentially encompassing a wider range of models. The US approach focuses on AI's dual-use risks and potential, while the EU's Act takes a broader view of systemic risks, including discrimination at scale, major accidents and negative consequences for human rights. Both frameworks are aligned on the need for documentation, model evaluation and cybersecurity requirements. The AI Act's global influence could parallel that of the GDPR, given the importance of the European market, whereas the US order primarily sets domestic policy with indirect global influence. Both jurisdictions, along with other G7 countries, have committed to creating a non-binding AI code of conduct, signalling increased collaboration on AI policy.
Open-source AI exceptions: Lawyers on Orrick's AI team, Julia Apostle, Sarah Schaedler, Shaya Afshar and Daniel Healow, wrote an overview of the AI Act's exceptions for AI systems released under free and open-source licences, which exclude systems that are high-risk or that interact directly with individuals. General-purpose AI models can qualify for a limited open-source exception if providers enable access, usage, modification and distribution of the model's parameters; however, the models must not present systemic risks. The exception's benefits include relief from certain transparency obligations, but providers must still share detailed summaries of training content and comply with EU copyright law. The authors say that when devising a strategy for open-source AI, developers, providers and deployers should consider the types of AI technologies being used, the pros and cons of licensing under open-source licences, and the necessary safeguards when utilising open-source AI technologies.
Spotlight on biometrics: David J. Oberly, Of Counsel at Baker Donelson, wrote that the AI Act imposes strict regulations on biometric systems, bans certain use cases and extends its reach extraterritorially to companies beyond the EU. Oberly states that biometric technologies such as identification, verification and categorisation are directly regulated, while there are prohibitions on categorisation systems based on sensitive personal attributes, emotion recognition in workplaces and schools, AI systems that expand facial recognition databases, and real-time identification in public spaces. High-risk and transparency classifications impose additional obligations, including visible notices for data subjects and the marking of synthetic content outputs. He advises companies involved in biometric systems to proactively work toward compliance with the Act, offering several recommendations in the post.
AI Office
Vision for the AI Office: Philipp Hacker, Sebastian Hallensleben and Kai Zenner published an op-ed in Euractiv arguing that implementing the world's first comprehensive AI legislation requires robust leadership and an innovative structure. Hacker, Hallensleben and Zenner state that the AI Office, facing numerous challenges including tight deadlines and limited budgets, must be designed with a clear and strategic structure to fulfil its mission effectively. They propose a structure comprising five specialised units: Trust and Safety, Innovation Excellence, International Cooperation, Research & Foresight, and Technical Support. In addition, attracting and retaining top talent, including from outside the EU institutions, is crucial, necessitating appealing work environments and arrangements. Leadership should combine institutional knowledge with AI expertise, supplemented by external advisors. Operational values should prioritise agility, minimal bureaucracy and autonomy, akin to those of successful start-ups. The authors also highlight the importance of securing cutting-edge hardware and software through innovative procurement methods. Finally, they call for transparency and accessibility to foster regular interactions with stakeholders including EU citizens, civil society, academia and SMEs.
Robust governance ideas: Academics Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal and Luciano Floridi suggest several enhancements to the governance of the AI Act. Firstly, they propose clearer guidelines for the structure of the AI Office and the expert selection process. Secondly, they advocate merging the Advisory Forum and the Scientific Panel into a single body to strengthen deliberation and avoid duplication. Thirdly, they emphasise the need for coordination among EU entities through an AI Coordination Hub to manage conflicting interests. Fourthly, they highlight the necessity for the AI Board to review national decisions to ensure consistent regulation and prevent misuse. Lastly, they recommend establishing a unit within the AI Office for learning from and refining AI practices through collaboration with Member States' competence centres.
Very interesting analysis of the approved law, thank you so much!
In a similar vein to the development of GDPR policies for the capture, retention and use of personal data, is this EU AI Act the first real attempt at developing AI governance from a systemic approach?
If yes, should collaboration with other countries, states and territories be extended so that the outcome is a truly global approach to the much-needed governance model? For instance, shouldn't all those developing AI and related assets be accountable to a single governance structure, rather than the EU structure for Europe and the US structure for those wishing to ignore it and run their own?