Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The Parliament and the Council have reached a provisional agreement on the AI Act. The legislation encompasses safeguards for general-purpose AI, limitations on the use of biometric identification systems by law enforcement, bans on social scoring and manipulative AI, and the right for consumers to launch complaints. Fines for non-compliance range from 7.5 million euros or 1.5% of turnover to 35 million euros or 7% of global turnover.

Banned applications include biometric categorisation systems using sensitive characteristics, untargeted scraping of facial images, emotion recognition in workplaces and educational institutions, social scoring based on social behaviour or personal characteristics, and AI systems that manipulate human behaviour or exploit vulnerabilities. Law enforcement exemptions for biometric identification systems are subject to judicial authorisation and limited to targeted searches for specific crimes or threats. High-risk AI systems face mandatory fundamental rights impact assessments, applicable also to sectors such as insurance and banking.

General-purpose AI (GPAI) models and systems must adhere to transparency requirements, including technical documentation and compliance with EU copyright law. GPAI models with systemic risk are additionally obliged to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, ensure cybersecurity and report on energy efficiency. The legislation supports innovation and SMEs through regulatory sandboxes and real-world testing. The agreement awaits formal adoption by both the Parliament and the Council to become EU law.
European Commission President Ursula von der Leyen has welcomed the political agreement as a historic moment. Von der Leyen describes the Act as the first comprehensive legal framework on AI worldwide, prioritising safety and fundamental rights while supporting human-centric, transparent, and responsible AI development and deployment. The voluntary AI Pact, which invites companies to anticipate the Act's rules ahead of their application, has already garnered interest from around 100 companies. Von der Leyen emphasises the EU's commitment to global AI governance, highlighting efforts in international forums such as the G7, OECD, Council of Europe, G20, and the UN.
Adam Satariano, Technology Correspondent at The New York Times, reported that the political agreement is one of the world's first attempts to address the societal and economic impacts of this rapidly evolving technology. Satariano states that the legislation aims to set a global benchmark by balancing the benefits of AI against potential risks such as job automation, misinformation, and national security threats. He caveats that while the Act is hailed as a regulatory breakthrough, concerns linger about its effectiveness, with some provisions not expected to take effect for 12 to 24 months. The deal, reached after three days of negotiations in Brussels, awaits approval by votes in the European Parliament and the Council. The law, which affects major AI developers as well as businesses in education, healthcare, and banking, will be closely watched around the world for its influence on the trajectory of AI development and its economic implications. Enforcement challenges, including regulatory coordination across 27 member states and potential legal disputes, are anticipated.
Analyses
While the AI Act political agreement has been reached, many technical details remain to be ironed out in the coming months. Stanford University researchers published a detailed proposal for foundation model regulation ahead of last week's trilogue meetings. Among the many technical details in the article, here are some takeaways. The authors propose proportional disclosure requirements for commercial model providers or otherwise well-resourced entities, with exemptions for low-resource entities such as students, hobbyists, and non-commercial academic groups. With regard to compute disclosure, they argue that the amount of computing power used should be reported in FLOPs, that the hardware used should be reported in relevant hardware units (e.g. number of NVIDIA A100-80GB GPUs), and that standards-setting bodies should be required to establish standards for measuring training time. Regarding environmental information, they note that measuring energy use and emissions directly will likely require information about the specific data centre involved, including who operates the hardware and in which location; this should allow reasonable estimates of energy use and emissions to be calculated.
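To illustrate the kind of estimate such disclosures would enable, here is a minimal sketch in Python of deriving training compute, energy use, and emissions from a reported GPU count and training time. It is not taken from the Stanford proposal: the peak throughput, utilisation, power draw, PUE, and grid carbon-intensity figures are illustrative assumptions only.

```python
# Illustrative sketch: estimating training compute and emissions from
# disclosed hardware counts and training time. All default constants are
# assumptions for the example, not figures from the AI Act or the proposal.

def estimate_training_disclosure(
    num_gpus: int,                        # e.g. reported number of NVIDIA A100-80GB GPUs
    training_days: float,                 # reported wall-clock training time
    peak_flops_per_gpu: float = 312e12,   # assumed BF16 peak throughput per GPU
    utilisation: float = 0.4,             # assumed average hardware utilisation
    power_per_gpu_kw: float = 0.4,        # assumed average power draw per GPU (kW)
    pue: float = 1.2,                     # assumed data-centre power usage effectiveness
    grid_kgco2_per_kwh: float = 0.3,      # assumed grid carbon intensity
) -> dict:
    seconds = training_days * 24 * 3600
    total_flop = num_gpus * peak_flops_per_gpu * utilisation * seconds
    energy_kwh = num_gpus * power_per_gpu_kw * (training_days * 24) * pue
    emissions_tco2 = energy_kwh * grid_kgco2_per_kwh / 1000
    return {"total_flop": total_flop,
            "energy_kwh": energy_kwh,
            "emissions_tco2": emissions_tco2}

# Example: 1,000 GPUs reported for 30 days of training
print(estimate_training_disclosure(num_gpus=1000, training_days=30))
```

Under these assumed figures, the 1,000-GPU, 30-day example works out to roughly 3×10^23 FLOP, around 350 MWh of energy, and on the order of 100 tonnes of CO2.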
Many interest groups have published statements in light of the political agreement on the AI Act. For example, civil society organisation AlgorithmWatch says that EU lawmakers have introduced key safeguards in the Act to protect fundamental rights, improving on the original draft from the European Commission in 2021. They add that advocacy efforts by civil society led to mandatory fundamental rights impact assessments and public transparency duties for high-risk AI systems. However, loopholes remain, such as letting AI developers determine whether their own systems count as high-risk, as well as exceptions for national security, law enforcement, and migration contexts. Trade association DIGITALEUROPE states that while a deal on the Act has been reached, concerns now arise about the last-minute regulation of foundation models, which they argue diverts focus from the Act's risk-based approach. They add that the new requirements, coupled with laws like the Data Act, may divert resources to compliance and legal matters, hindering SMEs unfamiliar with product legislation. Despite these concerns, they recognise that the AI Act, if implemented effectively, can drive positive outcomes for AI adoption and innovation in Europe.
The European DIGITAL SME Alliance has published a statement supporting the tiered regulation of foundation models to support SME innovation. The digital SME sector supports deregulation for start-ups and medium-sized ICT companies developing foundation models but advocates regulation for large, dominant providers. They state that these major players, often Big Tech companies, supply smaller developers with extensive base models and should be subject to third-party conformity assessments to ensure fair distribution of responsibility. This approach aims to prevent downstream users, especially SMEs, from bearing excessive compliance costs, and to lower market entry barriers for smaller entities. To avoid overregulating a digital economy primarily composed of SMEs, they propose clear criteria defining "very large foundation models", potentially based on computing power and the number of end users and business users, to ensure a balance between safety and innovation.
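As a purely hypothetical illustration of how such tiering criteria might work in practice, the sketch below classifies a foundation model provider using made-up thresholds for training compute and user counts; the thresholds, field names, and categories are assumptions, not figures from the DIGITAL SME Alliance statement or the AI Act.

```python
# Hypothetical tiering check for foundation model providers.
# All thresholds below are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class FoundationModelProvider:
    training_flop: float   # total compute used to train the model
    end_users: int         # e.g. monthly active end users
    business_users: int    # downstream companies building on the model

def is_very_large(provider: FoundationModelProvider,
                  flop_threshold: float = 1e25,
                  end_user_threshold: int = 10_000_000,
                  business_user_threshold: int = 10_000) -> bool:
    """Return True if the provider exceeds any of the assumed thresholds."""
    return (provider.training_flop >= flop_threshold
            or provider.end_users >= end_user_threshold
            or provider.business_users >= business_user_threshold)

# Example: one provider crosses the assumed thresholds, one does not
big = FoundationModelProvider(training_flop=3e25, end_users=50_000_000, business_users=20_000)
small = FoundationModelProvider(training_flop=1e23, end_users=200_000, business_users=150)
print(is_very_large(big), is_very_large(small))  # True False
```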
This law should motivate Latin American citizens to participate in the discussion and implementation of policies and laws for the appropriate use of AI developed in the US, the EU and elsewhere; to establish governance over the data, learning models, and AI agents fed with information from their citizens and companies; and to facilitate the development of AI industries and solutions by companies in the region, for the benefit of the region.