Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV's Luca Bertuzzi, the European Parliament reached a political agreement on 3 March to adopt an AI definition for the EU AI Act similar to the one used by the Organisation for Economic Co-operation and Development (OECD). This definition is important because it will determine which AI systems are subject to the EU's AI regulation. The agreed definition states that an AI system is a “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments.” A revised version may still remove the notion of a ‘machine-based’ system from the wording. The agreement to use the OECD's definition aims to ensure legal certainty, harmonisation and wide acceptance. In addition to the AI definition, the Parliament agreed on most other definitions in the AI Act, including those of significant risk, biometric authentication and biometric identification.
EURACTIV's Luca Bertuzzi also reported on other amendments considered by the Parliament. On 15 February, EU lawmakers did not reach an overall political agreement but provided extensive feedback on the proposal. Notes suggest that the bans on biometric categorisation and subliminal techniques will be elaborated in the preamble, specifically with regard to psychological manipulation in advertising. The proposed AI Office, which would enforce the regulation, has been downsized due to resource concerns. A new provision states that developers of high-risk AI systems must be guided and supervised within a regulatory sandbox so that they can be presumed to comply with the AI Act upon exiting it. A requirement for developers to verify that the data used to train their models was legally obtained has been removed. Leading MEPs want the Commission to issue common specifications on the requirements for high-risk systems related to protecting fundamental rights; these specifications would be repealed once incorporated into technical standards.
Analyses
Ryan Morrison at Tech Monitor summarised an open letter by the AI campaign group ForHumanity urging OpenAI to join a regulatory sandbox to test the limits of, and establish guardrails for, its general AI models and tools like ChatGPT. ForHumanity's open letter argues that a regulatory sandbox would ensure that OpenAI's models and tools comply with the AI Act, and that an independent audit of ChatGPT could address risks from tools that may underpin future artificial general intelligence. ForHumanity proposes three sandbox tools: 1) a risk management framework integrating standards and multi-stakeholder feedback; 2) an OpenAI ethics committee of experts following a public code of ethics; and 3) systematic societal impact analysis to assess the effects of products and developments on society. According to ForHumanity, these tools would give independent auditors the criteria needed to assure and certify high-risk AI compliance under the AI Act.
Antonella Zarra and Norberto Andrade from Meta published an op-ed in EURACTIV summarising the findings of Meta's Open Loop project on the AI Act, which tested its key requirements with over 50 European AI companies, SMEs and startups. Participants in the programme found that: 1) responsibilities between actors along the AI value chain should be better defined to reduce uncertainty; 2) more guidance on risk assessments and data quality requirements is needed; 3) data quality requirements should be realistic; 4) reporting should be made clear and simple; 5) transparency requirements should distinguish between different audiences, and enough qualified workers should be available for human oversight of AI; and 6) the potential of regulatory sandboxes should be maximised to foster and strengthen innovation.
Isobel Cockerell at Coda Story interviewed Petra Molnar, Associate Director of the Refugee Law Lab at York University and fellow at Harvard’s Berkman Klein Center, about what the AI Act could mean for the use of AI in border control. Molnar calls borders an AI “testing ground”, but says that the AI Act is a unique opportunity to try to create oversight, accountability and governance. She wants policymakers to consider whether the Act goes far enough to regulate, or even ban, the most high-risk technologies in the context of migration, but she does not have high hopes. She argues that predictive analytics and social media scraping are used to track migrants' digital activity, expanding surveillance beyond physical borders.
Gian Volpicelli at POLITICO wrote an article about how ChatGPT is challenging the EU's plan to regulate AI. Volpicelli writes that ChatGPT is a large language model with no single intended use: it can be used to write songs, computer code, policy briefs, fake news reports and even court rulings. MEP Dragoș Tudorache adds that these AIs are like engines able to do a number of things but not yet allocated to a purpose. The Council approved its version of the draft AI Act in December, which would give the Commission the authority to establish cybersecurity, transparency and risk-management requirements for general-purpose AIs. Key MEPs recently proposed that AI systems producing complex text without human oversight should be classified as "high-risk" systems, but the proposal was criticised on the grounds that it would arguably classify many harmless activities as high-risk. Some organisations say that all general-purpose AIs should be regulated, not just text generators. According to a recent report by Corporate Europe Observatory, companies like Microsoft and Google have lobbied EU policymakers to exempt general-purpose AI systems such as ChatGPT from the obligations placed on high-risk AI systems.
Alexander Olbrechts from MedTech Europe published an op-ed in EURACTIV arguing that alignment between the AI Act, the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation is critical to facilitating continued access to innovative healthcare. Olbrechts states that ensuring a level playing field and consistent rules is important to improve healthcare and avoid issues such as regulatory duplication or conflict. Failure to address this could exacerbate the challenges of an already burdensome regulatory system that is contributing to medical device shortages.
PS! The Future of Life Institute, where I work, is currently hiring an AI Policy Advocate to work with us on European AI policy. The deadline is 30 March.
What must be addressed above all is that AI models must be built with the consent of the owners of the data that feeds their databases. At the moment, the models have been built on public data, copyrighted material and private records. What will you do to stop the exploitation of personal data, identity and copyright by big tech?
Hi, thanks for the great share! I have a question: why was the requirement to prove that the data used to train an AI was legally obtained dropped from the Act?