The EU AI Act Newsletter #70: First Measures Take Effect
The Act, which entered its first phase on 2 February just before the Paris AI Summit, now prohibits specific AI applications.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
The first measures of the AI Act take effect: Le Monde journalist Alexandre Piquard reported that the first phase of the Act began on 2 February, just before the Paris AI Summit. Initially, the Act prohibits specific AI applications, including social scoring systems, predictive policing AI for individual profiling, and emotion recognition in workplaces or schools. Also banned are the exploitation of people's vulnerabilities and the use of manipulative or subliminal techniques. Real-time facial recognition in public spaces and biometric categorisation to infer personal characteristics are likewise prohibited, subject to limited law enforcement exemptions. The regulation's implementation will be gradual. From 2 August, providers of general-purpose AI models must provide transparency regarding technical documentation and training data, with the largest models requiring security audits. After that, obligations will extend to high-risk AI applications in sectors including infrastructure, education, employment, banking and justice. Enforcement will be handled by the new AI Office and national authorities. Violations can result in fines of up to 7% of global annual turnover for prohibited practices and 3% for other infringements.
EU threshold for AI models with systemic risk needs reviewing: MLex journalists Luca Bertuzzi and Jean Comte reported that France is pushing for a revision of the criteria that determine when a general-purpose AI model poses a "systemic risk" under the AI Act, before the relevant obligations take effect on 2 August. According to a document seen by MLex, the French government is asking the European Commission to update the threshold, reference criteria and indicators used for this classification. The AI Act imposes requirements on providers of general-purpose AI models, including major tech companies like OpenAI, Google and Meta. Models trained with computing power above a certain threshold are deemed to present systemic risks to society and therefore face particularly stringent obligations. The French authorities consider an urgent review of this threshold necessary and want the update completed before the Act's rules for model providers begin to apply.
Analysis
Responsibilities along the value chain for high-risk systems: A KU Leuven Law blogpost discussed how the AI Act regulates high-risk AI systems by allocating responsibilities across the value chain, including providers, authorised representatives, importers, distributors and deployers. While this linear approach promotes accountability and compliance, it has significant limitations. The Act assigns clear responsibilities based on each actor's level of control, with providers bearing the primary responsibility for ensuring compliance before market entry. For non-EU providers, an authorised EU representative must be appointed. However, the Act's linear framework may not adequately address complex relationships between multiple parties in different contexts, potentially creating accountability gaps. There are concerns about responsibility-shifting, particularly from providers to deployers, especially when large tech companies limit their liability through contracts. The Act's handling of serious incidents is problematic, requiring deployers to inform providers before authorities. Given the opacity of AI systems, establishing causal links between design and harm can be challenging, and the provisions do not adequately connect harmed individuals with responsible parties.
Code of practice at a crucial phase: Laura Caroli, Senior Fellow of the Wadhwani AI Center at the Center for Strategic and International Studies, stated that while many outside Europe consider the EU AI Act settled, crucial developments are still unfolding through the drafting of rules for general-purpose AI models, particularly those posing systemic risks. The regulation broadly outlines provisions, leaving the specifics to a voluntary code of practice. The second draft maintains the structure of the first but combines the Safety and Security Report with the Safety and Security Framework to enhance compliance tracking and transparency for the AI Office. The taxonomy section has been integrated into the commitments for providers of general-purpose AI models with systemic risk. The new draft appears more industry-friendly, replacing "signatories will" with "signatories commit to" and emphasising flexibility in implementation. Notable changes include clarification that incident reporting does not constitute an admission of wrongdoing and that providers commit to using external assessors in the pre-deployment phase only in certain instances. However, the draft also introduces more complex obligations, including stricter timeframes: for instance, providers must reassess their Safety and Security Framework every six months rather than annually.
Status and challenges in the standardisation of high-risk requirements of the AI Act: Bitkom emphasises in a position paper that standardisation is crucial for the successful implementation of the Act, highlighting several critical challenges and solutions. The slow progress in standardisation is creating legal uncertainty, increasing compliance costs and potentially hampering AI innovation and product launches. While the European Commission's involvement in the process is necessary, it must be balanced to maintain efficiency and independence in technical discussions. The paper advocates prioritising general standards that are flexible enough to accommodate industry-specific needs and existing standards, which could also accelerate the standardisation process. European standards should align with international ISO/IEC standards to facilitate market access and prevent barriers to trade. Consistency across all AI Act standards and their interaction with other horizontal legal acts is essential, with requirements and definitions designed to be complementary wherever possible.
What lessons can Australia learn from the EU AI Act? Wendy Yang of the Law Society interviewed Jose-Miguel Bello Villarino, Senior Research Fellow at the University of Sydney Law School. Villarino argues that advanced liberal democracies broadly agree on the need for some degree of risk-based AI regulation. He suggests that Australia should learn from the EU AI Act, particularly regarding the prohibition of certain AI applications, such as those exploiting human vulnerabilities. A key advantage of the Act, according to Villarino, is its clarity and predictability: developers know what to expect when creating systems for the EU market of roughly 450 million people. For Australia's AI regulation, Villarino emphasises the importance of interoperability. Given Australia's small market for both development and deployment, the country will need to import AI systems and ensure locally developed systems are compatible with international regulations.