Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
European Parliament voting on 13 March: The European Parliament released the draft agenda for its sessions on 11-14 March in Strasbourg, featuring a plenary vote on the AI Act on 13 March. The Act was endorsed by the Committee of Permanent Representatives on 2 February 2024, and approved by the Internal Market and Consumer Protection Committee and the Civil Liberties, Justice and Home Affairs Committee on 13 February 2024. The plenary vote is one of the last steps before the law enters into force.
AI Office launch: The European AI Office launched on 21 February. It will play a key role in implementing the AI Act, especially for general-purpose AI, by fostering the development and use of trustworthy AI and encouraging international cooperation. The office will support implementation of the Act and enforce the general-purpose AI rules by developing tools, methodologies and benchmarks for evaluating the capabilities and reach of general-purpose AI models, and by classifying models with systemic risks. It will soon start recruiting people with policy, technical, legal and administrative expertise. To get in touch with the AI Office, email CNECT-AIOFFICE@ec.europa.eu for general inquiries, CNECT-AIOFFICE-EXPERTS@ec.europa.eu for inquiries related to expert collaboration, or CNECT-AIOFFICE-RECRUITMENT@ec.europa.eu for inquiries related to job opportunities.
Analyses
Questions about safeguards for asylum seekers: Journalist Eftichia Soufleri questioned in Balkan Insight whether the AI Act's protections will apply to asylum seekers. The Act aims to regulate the use of biometrics for identification, recognition and categorisation, as well as predictive decision-making systems. Rights groups warn that these AI systems could unfairly target migrants and people under law enforcement scrutiny, undermining their rights. While the Act does classify some AI uses as high-risk, there is concern that migration-related AI systems will fall outside this category. This lack of clarity means these systems might not be subject to strict transparency rules and rights assessments. Loopholes in the Act further weaken its safeguards: AI developers could have a say in whether their own systems count as high-risk. And despite some banned practices (e.g., emotion recognition in workplaces), exemptions for national security could still permit the use of AI in border security and biometric identification.
Deepfake regulation in the AI Act: Cristina Vanberghen, international legal practitioner and academic, provided an overview in Euractiv of how the Act regulates deepfakes and discussed whether further legislation is needed. Rather than banning deepfakes outright, the Act focuses on transparency: Article 52(3) mandates that creators disclose the artificial origin of deepfakes and the techniques used, aiming to inform consumers and reduce susceptibility to manipulation. Vanberghen notes that deepfakes are classified as "limited risk" AI systems, facing fewer requirements than high-risk systems despite their potential for harm. Because the Act lacks a clear framework for legal liability, some suggest criminalising deepfakes to deter malicious use, particularly fraudulent activities, political manipulation, and child pornography. Vanberghen argues that alongside promoting digital literacy and critical thinking to combat manipulation, effective enforcement mechanisms and international cooperation are crucial given the transnational nature of deepfake threats.
Microsoft-Mistral partnership: Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament, wrote a personal blog post about the Microsoft-Mistral partnership announcement and what it means for arguments against regulating foundation models under the AI Act. Zenner says the partnership highlights contradictions in the case made against such regulation. He points out that the French government and the European Commission argued for lighter regulation of foundation models within the Act, ostensibly to protect "true independent EU champions" like Mistral and to boost EU competitiveness. They warned that, without regulatory relaxation, Mistral would have to close off open access or partner with foreign tech giants. According to Zenner, Mistral's partnership with Microsoft casts doubt on this rationale: open access appears to be ending regardless, and the company no longer fits the "independent" champion profile. This suggests the EU's approach may have been misguided, an outcome stakeholders had warned about. Zenner hopes the episode helps dispel the flawed "digital sovereignty" argument, which prioritised perceived independence over realistic outcomes for the EU's AI sector.
AI Act's effects in finance: Fausto Parente, Executive Director at the European Insurance and Occupational Pensions Authority, wrote in Eurofi Magazine that AI-driven credit assessments and insurance risk evaluations in the financial sector face heightened scrutiny under the Act. European standardisation bodies will refine these requirements further, and national authorities will ensure compliance. The Act also covers general-purpose AI systems, including large language models, which financial institutions are experimenting with and which are expected to become mainstream soon. The new AI Office will oversee the rules for these systems, while financial firms remain responsible for the tools they outsource. Other AI uses in finance will continue to be governed by existing legislation, though supervisors must assess whether additional guidance is needed in areas such as claims management, anti-money laundering and fraud detection, taking into account proportionality, fairness, explainability and accountability.
Fines in the AI Act: Osman Gazi Güçlütürk (Legal and Regulatory Lead in Public Policy), Siddhant Chatterjee (Public Policy Strategist) and Airlie Hilliard (Senior Researcher) at Holistic AI summarised the penalties for non-compliance with the Act on their blog. The Act establishes a tiered fine system for AI system operators, providers of general-purpose AI (GPAI) models and Union agencies, with penalties varying by the nature of the infringement. The most severe fines, up to €35,000,000 or 7% of global annual turnover, target violations involving prohibited systems. Non-compliance with most other obligations under the Act is subject to fines of up to €15,000,000 or 3% of global annual turnover. The lowest tier, capped at €7,500,000 or 1% of global annual turnover, covers the supply of incorrect, incomplete or misleading information; in each case, the higher of the two amounts applies for companies. These penalties apply to providers, deployers, importers, distributors and notified bodies.
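To make the tier arithmetic concrete, here is a minimal Python sketch of how the fine ceilings scale with company size. The tier labels and the blanket "higher of the two amounts" rule are illustrative assumptions based on the figures summarised above, not a legal calculator.

```python
# Illustrative sketch only, not legal advice. The tier labels and the
# assumption that the higher of the fixed amount and the turnover
# percentage applies across all tiers are based on the summary above.

FINE_TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),  # up to €35M or 7% of turnover
    "other_obligations":      (15_000_000, 0.03),  # up to €15M or 3% of turnover
    "misleading_information": (7_500_000, 0.01),   # up to €7.5M or 1% of turnover
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling for a given infringement tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    # The ceiling is the higher of the fixed amount and the turnover share.
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A company with €2 billion in global annual turnover deploying a prohibited
# system faces a ceiling of max(€35M, 7% of €2B) = €140M.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```

For small companies the fixed amount dominates, while for large providers the turnover percentage quickly becomes the binding figure, which is why the percentages matter most for the biggest AI developers.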