Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The European Parliament has uploaded a document presenting all three EU institutional positions side by side: the European Commission's original text, the European Parliament's version, and the Council of the EU's position. We have added it alongside all other documents on the EU AI Act website, and we will continue to follow developments now that policymakers are back from the long policy break.
Analyses
Academics from Imperial College London and Aarhus University – Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, and Rafael A. Calvo – wrote an op-ed in EURACTIV arguing that the draft EU AI Act aims to address concerns about harmful 'subliminal techniques' used by AI systems but lacks a clear definition of the term. The Act is concerned with subliminal techniques that influence people's choices in ways they are not conscious of or cannot resist, and it prohibits systems employing such techniques if they are likely to cause significant harm. The authors argue that while this prohibition aims to protect users, it risks being ineffective without a precise definition of 'subliminal techniques': defining them narrowly as the presentation of subliminal stimuli would overlook many cases of manipulation. They instead propose a broader definition intended to capture problematic manipulation without overburdening regulators or companies: “Subliminal techniques aim at influencing a person’s behaviour in ways in which the person is likely to remain unaware of (1) the influence attempt, (2) how the influence works, or (3) the influence attempt’s effects on decision-making or value- and belief-formation processes.”
The Ada Lovelace Institute published a position paper for the AI Act trilogues. It makes the following recommendations: 1) set up an AI Office to lead on monitoring, foresight, and cross-border investigations, issue guidance and analysis on emerging issues, and instigate dialogues with foundation model developers; 2) regulate foundation models regardless of distribution channel, including by mandating third-party audits, disclosure of training runs, compute and capability evaluations, and a complaints mechanism; 3) compel risk and misuse mitigation across the AI lifecycle by requiring vetted researcher access for external scrutiny and setting up a benchmarking institute; 4) maintain a risk-based regulatory approach with clear processes for updating the legislation so that it remains future-proof; and 5) enhance protection and representation for affected persons through pre-deployment impact assessments and remedies frameworks.
Federico Guerrini wrote in Forbes that Spain has announced the creation of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), the first AI regulatory body in the European Union. The agency, led by a diverse team of technology experts, lawyers, and humanities scholars, has a broad mission to oversee the influence of AI on Spanish society. Its responsibilities will include developing risk assessment protocols, auditing algorithms and data practices, and formulating binding rules for companies involved in the creation and implementation of AI systems. The agency will likely also play some role in enforcing the AI Act.
A coalition of open culture and AI organisations – namely GitHub, Creative Commons, EleutherAI, Hugging Face, LAION, and Open Future – published a policy paper stating that the AI Act has the potential to become a global model for AI regulation, balancing risk management with the promotion of innovation, but that it must be improved to avoid hindering the open ecosystem for AI. The coalition makes five recommendations: 1) define AI components clearly; 2) clarify that collaborative development of open source AI components, and their availability in public repositories, does not subject developers to the AI Act's requirements; 3) support the AI Office's coordination and inclusive governance with the open source ecosystem; 4) ensure the R&D exception is practical and effective by permitting limited testing in real-world conditions; and 5) set proportionate requirements for foundation models, recognising and distinctly treating different uses and development modalities, including open source approaches.
Norberto de Andrade, Director of AI Policy at Meta, Laura Galindo, AI Policy Manager at Meta, and Antonella Zarra, AI Policy Program Manager at Meta, published the last two reports in their five-report series on the AI Act, part of their Open Loop policy prototyping experiment. In the first report, the researchers tested Article 52(a) of the Act, concerning transparency obligations for AI systems that interact with individuals. They exposed 469 survey participants to AI-powered systems (a chatbot and a news app) with different notification styles: no notification, content-integrated notification, and a notification banner. Participants' understanding of a notification did not significantly affect their sense of control over, or trust in, the tested AI applications. The second report summarises a policy prototyping exercise evaluating the Act's risk management and transparency provisions, which aimed to gauge the clarity and feasibility of specific requirements for AI companies. The key takeaway is that companies need additional technical guidance and standardisation to comply with the Act, along with clarification of key elements.
Veronika Rinecker, Managing Editor at Cointelegraph, wrote an overview of how German political parties are split over AI regulation under the AI Act. Die Linke, the left-wing party, proposes rigorous oversight of high-risk AI systems by a supervisory authority before market launch, and advocates banning biometric identification in public spaces, AI-driven election interference, and predictive policing. By contrast, the centre-right coalition known as the Union prioritises fostering an innovation-friendly environment, opposes the establishment of a large Brussels-based supervisory authority, and seeks alignment with existing data and digital regulations. The German government, while supporting the AI Act, aims to strike a balance between regulation and innovation, seeking improvements and advocating for ambitious AI testbeds during the trilogue negotiations.