Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Spain has assumed the rotating presidency of the EU Council of Ministers, with a focus on digital priorities and reaching a political agreement on the AI Act. According to EURACTIV's Luca Bertuzzi, in preparation for the trilogue negotiations between the Council, Parliament and Commission, Spain circulated a document outlining its position on critical points of the Act. These include the definition of AI, the classification of high-risk applications, the list of high-risk use cases, and the impact assessment on fundamental rights. On the definition, the presidency considers options such as sticking with the Council's text, aligning with the Parliament, or awaiting the OECD's direction. On the classification of high-risk applications, various options are considered, such as adopting the Parliament's version without the notification of competent authorities or refining it with binding self-assessment criteria for AI providers. The document also considers whether the AI Act is the right place to address concepts like democracy, the rule of law and sustainability, and whether the term 'deployer' should be introduced to avoid confusion. The discussions are set to inform negotiations in the trilogue scheduled for 18 July.
According to Luca Bertuzzi, the Spanish presidency has also circulated options concerning the articles on sandboxes and innovation (Articles 51-55). The Parliament has made it mandatory for national authorities to establish AI sandboxes, where companies can experiment with new AI applications under the supervision of a competent authority. The Council, by contrast, keeps this measure voluntary, an approach supported by eight countries, while some others favour the Parliament's approach. The Parliament's position also entails granting AI developers that exit a sandbox a presumption of conformity for their systems, but the Spanish presidency highlights some concerns with this process, including losing control over the compliance process and negatively impacting competition. Other priority topics include real-world testing conditions and whether to determine the details of regulatory sandboxes through an implementing act.
According to POLITICO's Pieter Haeck, Belgium plans to advocate for the establishment of an agency with technical expertise in algorithms within the European Union during its Council presidency next year. The country recognises the need for a structure that can objectively analyse algorithms to support AI governance and rule enforcement. Instead of creating a new agency, Belgium suggests upgrading the European Centre for Algorithmic Transparency (ECAT) in Seville, which currently provides technical expertise on AI-powered systems for content moderation under the Digital Services Act. While some EU data protection authorities have already assumed algorithm-surveillance powers, Belgium believes the enforcement of AI governance should take a different approach to avoid fragmentation. Belgium has discussed the idea with other EU countries and aims to prioritise it during its Council presidency in January 2024.
Analyses
The Future of Life Institute (where I work) published its position on the AI Act for the trilogue negotiations. The position emphasises that the Parliament's proposal rightly assigns most responsibility to builders of general purpose AI systems (GPAI) because they have the necessary resources and knowledge to comply. The Future of Life Institute recommends that the definition and regulatory treatment of GPAI should make clear that it includes foundation models and generative AI systems, to provide legal clarity. Providers of these systems should undertake "know-your-customer" checks and undergo third-party conformity assessments to mitigate potential harms. Large companies should not be allowed to evade their responsibilities by declaring that their systems should not be deployed in high-risk use cases. An AI Office should be established to ensure effective coordination of enforcement among Member States. The office should consult all relevant stakeholders, including civil society, when reviewing the legal regime governing GPAI.
The German AI Association also published its position on the AI Act trilogue. It argues that to avoid unsustainable regulatory burdens and disproportionate compliance costs for the European AI ecosystem, the AI Act needs to address several key areas during the upcoming trilogue negotiations. The German AI Association recommends that foundation models be subject to transparency and data governance requirements proportionate to the risk level of the specific use case. The high-risk classification in Annex III should be narrowed to critical areas and take into account the size and resources of the respective provider or deployer. The definition of AI in the AI Act should be narrowed to focus strictly on AI systems, not any advanced software. The EU should facilitate the timely development of harmonised standards in line with the rapid technological evolution of AI. The AI Act should also include more provisions that support private sector initiatives, particularly European AI start-ups and SMEs.
Javier Espinoza at Financial Times reported that executives from 150 businesses, including Siemens and Heineken, have signed an open letter criticising the proposed AI Act. The letter argues that the proposed rules would jeopardise Europe’s competitiveness and technological sovereignty by imposing disproportionate compliance costs and liability risks on companies. The letter calls for regulation that confines itself to broad principles in a risk-based approach rather than heavily regulating foundation models. The executives also called for the EU to establish a regulatory body of industry experts to monitor the implementation of the law as technology advances. However, Dragoș Tudorache, an MEP who led the development of the draft law, said that the executives had failed to read the text and were reacting on the stimulus of a few aggressive lobbyists.
Ryan Browne from CNBC reported that, according to Thomas Kurian, the head of the company's cloud computing division, Google is in talks with regulators in the EU regarding the AI Act. The company is purportedly working on tools to address a number of the EU's concerns surrounding AI, including the difficulty of distinguishing between human-generated and AI-generated content. It has unveiled a "watermarking" solution that labels AI-generated images and is developing further technologies to help people make that distinction. Browne states that this hints at how Google and other major tech companies are working on means of bringing private sector-driven oversight to AI ahead of formal regulation of the technology. Kurian emphasises that Google welcomes regulation and is working with governments to ensure that AI is adopted in the right way.