Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
European Parliament leading committees vote yes: On 13 February, the Internal Market and Civil Liberties Committees voted 71-8, with 7 abstentions, to approve the outcome of negotiations with the member states on the AI Act. The Act includes various safeguards, among them rules for general-purpose AI, limitations on the use of biometric identification systems by law enforcement, and bans on social scoring and on AI used to manipulate users or exploit their vulnerabilities. The next steps are formal adoption in a Parliament plenary session and a final Council endorsement. The Act will become fully applicable 24 months after its entry into force, with certain provisions taking effect sooner: bans on prohibited practices after 6 months, codes of practice after 9 months, general-purpose AI rules and governance after 12 months, and obligations for high-risk systems after 36 months.
European AI Office enters into force on 21 February: A European Artificial Intelligence Office will be established within the Commission, under the Directorate-General for Communication Networks, Content and Technology and subject to its annual management plan. The Office's tasks, outlined in the forthcoming regulation, include developing tools to assess the capabilities of general-purpose AI models, monitoring the implementation of the rules, identifying emerging risks, investigating potential infringements, and supporting the enforcement of the provisions on prohibited AI practices and high-risk systems. It will collaborate with the relevant bodies under sectoral legislation, facilitate information exchange between national authorities, and maintain a database of cases in which general-purpose AI models are integrated into high-risk AI systems. Many additional tasks are planned for the new institution. The decision comes into effect on 21 February 2024.
Analyses
Simple overview of the AI Act: The startup association France Digitale and the French consulting firm Wavestone have published an overview of the AI Act to help AI companies comply with the upcoming regulation. The Act aims to ensure that AI systems and models marketed within the EU are used in an ethical and safe manner that respects fundamental rights. Compliance is required of all providers, distributors, and deployers of AI systems and models within the EU or marketed into it. The level of regulation varies by risk, with four tiers ranging from unacceptable to minimal risk, each carrying its own compliance requirements and deadlines of between 6 and 36 months. Special obligations apply to generative and general-purpose AI, depending in part on whether the model is open source. The guidance offers three use cases to illustrate compliance considerations: 1) spam filters as low-risk, 2) artistic deepfakes as low-risk with disclosure requirements, and 3) credit scoring as high-risk, requiring stringent compliance due to the potential for discrimination. The report encourages organisations to prepare and anticipate compliance.
Parliament staff member's personal reflections: Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament, has written a personal blog post reflecting on the AI Act. Zenner notes that the Act, introduced by the European Commission on 21 April 2021, completed its legislative negotiations after 1,004 days. Despite the challenges, Zenner considers the Act more future-proof than previous digital regulations, balancing the concerns and opportunities AI presents. Notable achievements, in his view, include international alignment, exemptions for scientific research and open-source development, and flexible high-risk obligations. However, Zenner sees as many concerns with the AI Act as benefits, among them its poor conceptual fit for regulating AI, its lack of legal certainty, and an overly complex governance system that may lead to differing interpretations and enforcement approaches among member states. Despite these criticisms, Zenner hopes that legal uncertainty can be kept from worsening as the Act gradually becomes applicable over the coming years, through harmonised technical standards, regulatory sandboxes, guidelines, implementing and delegated acts, and the AI Office.
The wrong approach for generative AI?: S. Alex Yang, Professor of Management Science and Operations at London Business School, and Angela Huyue Zhang, Associate Professor of Law and Director of the Philip K. H. Wong Centre for Chinese Law at the University of Hong Kong, have written an op-ed in Project Syndicate questioning whether the EU's approach to generative AI is the right one. The impending rollout of the AI Act aims to establish strict rules for AI, showcasing the EU's proactive approach to governance. In contrast, the United States lacks a cohesive AI regulatory framework, which has resulted in a surge of litigation against leading AI firms over various issues. While the EU's transparency requirement for copyrighted materials aims to facilitate compensation negotiations, there are concerns that overly restrictive regulation could hinder the growth of the AI industry. Nonetheless, failure to ensure fair compensation for content creators could lead to the collapse of creative sectors such as journalism. The authors argue that the common-law system, with its case-by-case adjudication, may offer a more adaptable framework for AI regulation than the EU's broad-reaching mandate.
Next steps in the implementation: The International Association of Privacy Professionals has prepared an infographic showing the key next steps in the implementation of the AI Act. The Act will enter into force 20 days after publication in the Official Journal of the EU, with implementation phased over time. Key milestones include: full entry into application of the Act after 24 months, prohibitions on unacceptable-risk AI after just 6 months, and obligations on providers of general-purpose AI models after 12 months. Further milestones, covering post-market monitoring, high-risk AI system obligations, and penalties for non-compliance, follow over the 36 months after the law's entry into force. By the end of 2030, obligations will apply to certain AI systems used in large-scale IT systems in the areas of freedom, security and justice, such as the Schengen Information System. The Commission can issue delegated acts and guidance on various aspects, and the AI Office is tasked with developing codes of practice. A Commission report on its delegated powers is due no later than nine months before the fifth anniversary of entry into force.
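Since the Act's deadlines are all counted in months from the date of entry into force, readers tracking compliance can compute the phased milestones mechanically. The sketch below does this in Python; the entry-into-force date used is purely hypothetical (the actual date depends on publication in the Official Journal), and the milestone list reflects the offsets mentioned above.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later, clamped to month length."""
    years_over, month_index = divmod(d.month - 1 + months, 12)
    year, month = d.year + years_over, month_index + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

# Offsets in months from entry into force, per the phased timeline.
milestones = {
    "Prohibitions on unacceptable-risk AI": 6,
    "Codes of practice": 9,
    "General-purpose AI rules and governance": 12,
    "General application of the Act": 24,
    "Obligations for high-risk systems": 36,
}

for name, months in milestones.items():
    print(f"{add_months(entry_into_force, months).isoformat()}  {name}")
```

The month arithmetic clamps to the last day of shorter months, so a deadline counted from, say, 31 January lands on 28/29 February rather than overflowing into March.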