Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Natasha Lomas from TechCrunch wrote that the European Parliament is close to finalising its stance on generative AI, with the aim of reaching a final consensus on the AI Act by the end of the year. While the Council largely deferred decisions on generative AI in December, MEPs are proposing that hard requirements be added to the AI Act. The Parliament is gravitating towards a three-layered approach to address responsibilities across the AI value chain, ensure foundation models get some guardrails, and tackle specific content issues attached to generative models. One layer would apply to all general purpose AI, the second would address foundation models, and the third would target generative AI specifically. Lawmakers aim to set specific responsibilities for generative AI, including the content it can produce and the copyrighted material used to train it.
According to EURACTIV's Luca Bertuzzi, the European Parliament is working on stricter rules for foundation models like ChatGPT, distinguishing them from general purpose AI. Bertuzzi reports that the key committee vote originally scheduled for 26 April will be postponed. In the new compromise text, a foundation model is defined as an “AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”, while a general purpose AI is defined as an “AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”. EU lawmakers want foundation model providers to comply with requirements such as testing and mitigating risks to health, safety, fundamental rights, the environment, democracy, and the rule of law, with the involvement of independent experts. By contrast, the provision requiring providers of foundation models to conduct ‘know your business customer’ checks on downstream operators has been removed.
EURACTIV's Luca Bertuzzi also reported that MEPs discussed various parts of the AI regulation during a political meeting on 13 April, with the most politically sensitive issue being prohibited practices. The German liberals proposed a provision banning the use of AI systems for monitoring, detecting, and interpreting private content in interpersonal communication services, including measures that could undermine end-to-end encryption. However, the conservative European People's Party, which takes a more lenient stance towards law enforcement, opposes this provision. In exchange for removing it, more progressive MEPs aim to ban emotion recognition technologies in law enforcement, border management, the workplace, and educational institutions. The high-risk classification of AI systems under Annex III would be automatic only if they pose a significant risk of harm to health, safety, or fundamental rights. AI providers must notify the competent national authorities if they consider that their systems do not pose a significant risk, and the European Commission will develop guidelines specifying the criteria for this self-assessment six months before the regulation's implementation.
Analyses
Ezra Klein, Opinion Columnist at the New York Times, argued that there is a growing call for AI regulation, with industry insiders saying they are desperate for regulation, even if it slows them down. Klein states that competition is forcing companies to move too fast and cut corners, and that without regulation no single company can slow down to a safe pace. He notes that policymakers around the world have put forward frameworks to govern AI, including the US government's Blueprint for an AI Bill of Rights and the EU's AI Act. The latter, Klein says, aims to regulate AI systems based on how they are used, with a focus on high-risk applications, but it does not regulate the underlying models that power all use cases. According to Klein, the EU's description of the AI Act as 'future-proof' now sounds arrogant, as new AI systems have already thrown the law's definitions into chaos. In his own view, priorities for AI regulation should include interpretability, security, evaluations and audits, liability, and humanness.
Justin Hendrix, CEO and Editor of Tech Policy Press, published a summary of a joint policy brief by an international group of AI experts, arguing that general purpose AI (GPAI) systems carry significant risks and must not be exempt under EU legislation. Experts from the AI Now Institute, the Distributed AI Research Institute, Mozilla Foundation and Hugging Face are joined by more than 50 institutional and individual signatories. The brief makes five main points: 1) the GPAI category must apply to a spectrum of technologies, rather than being limited to chatbots and large language models; 2) GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms, which cannot be effectively mitigated at the application layer; 3) GPAI must be regulated throughout the product cycle, not just at the application layer, to account for the range of stakeholders involved; 4) the developers of GPAI should not be able to relinquish responsibility through a standard legal disclaimer; and 5) regulation should avoid endorsing narrow methods of evaluation and scrutiny for these models.
Inês de Matos Pinto, Legal and Digital Affairs Advisor for the Socialists and Democrats group, and Kai Zenner, Head of Office and Digital Policy Advisor for MEP Axel Voss, wrote an op-ed in EURACTIV arguing that the trustworthiness of all AI systems developed in the EU should be at the centre of the AI Act negotiations. The authors argue that the European Parliament has been the driving force behind the inclusion of a central provision in the AI Act that outlines a set of common principles to be respected by all AI systems in the EU. These principles include human agency and oversight; technical robustness and safety; respect for privacy and data governance; transparency; diversity, non-discrimination and fairness; and social and environmental well-being. The authors say that the AI Act has already translated these principles into obligations for the providers and deployers of high-risk AI systems, while voluntary application based on harmonised standards, technical specifications, and codes of conduct is proposed for all other AI systems.