Welcome to the EU AI Act Newsletter, a brief biweekly publication from the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV, the rapporteurs Brando Benifei and Dragoș Tudorache circulated new compromise amendments. On high-risk obligations, one change is that providers must immediately inform distributors and, where applicable, other actors in the value chain of any non-compliance and corrective action. On responsibility, discussions are ongoing about how to allocate obligations across the complex AI supply chain. On administrative procedures, the goal is to ensure consistency across the bloc, including in the procedure for assessing and monitoring conformity assessment bodies. Finally, on technical standards, the European People’s Party's suggestion to refer to trustworthy AI during the standardisation process has now been added.
Analyses
Euroconsumers gave an overview on their website of how the AI Act has progressed of late. They first acknowledge that, owing to the complexity of the regulation and its interaction with sector-specific laws, progress has been slower than they had expected, with the Parliament's position still unfinished. Next, they list some pro-consumer amendments put forward by MEPs, such as a set of mandatory basic principles, like fairness, accountability and transparency, that would apply to all AI systems; new rights for consumers, such as the right to be represented by a consumer organisation; some prohibited practices being broadened and strengthened; and third-party assessment being proposed as the conformity assessment procedure for high-risk AI systems. Finally, the article mentions that the European Commission has issued a Standards Request to the European Standardization Organizations. The authors argue that while technical standards are very important in many fields, consumer groups are worried about their impact on fundamental rights and the lack of democratic process in the development of standards.
The Center for Data Innovation wrote a blog post arguing that the EU should clarify the distinction between explainability and interpretability in the AI Act. The post refers to Article 13, which states that “High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.” The writer takes issue with this, noting that there are no specifics on what it means to “interpret” an AI system's output, nor on the technical measures a provider must take to demonstrate compliance. The post therefore recommends that the EU clarify its terminology and ensure it does not mistakenly outlaw the most innovative systems, which may not be interpretable yet still be high-performing.
AlgorithmWatch wrote an explainer of the AI Act, covering whom it applies to, what its scope is, and how it pursues transparency and accountability. The guide begins by highlighting some recent examples of AI-related harm, such as the Dutch social welfare fraud scandal, the UK grading scandal and the UK criminal conviction case, and suggests that the AI Act could protect against such harms. The authors state that the Council of the EU is pushing for a narrow definition of AI, whereas the original draft by the Commission defined it rather broadly. The post argues that simple AI systems can also be harmful, citing the Dutch fraud incident, in which crude software was used to calculate the fraud risk of benefits recipients in a biased way.
The Brookings Institution published a blog post about the Council of the EU's approach to open-source general-purpose AI (GPAI). The post claims that the Council proposes to regulate open-source GPAI, and argues that this would undermine the development of these systems and impede research that is critical to the public's understanding of AI. The post acknowledges that regulating such models is reasonable given that their capabilities are rapidly increasing, that they raise concerns such as disinformation and deepfakes, and that they are opaque and hard to understand. The author emphasises that very few institutions have the resources to train cutting-edge GPAI models because of the high development costs. According to the author, regulating these systems would therefore further concentrate power over the direction of AI in big technology companies.
A EURACTIV article argued that a health-centric approach to the AI Act is essential for protecting the health and fundamental rights of European citizens. The article highlights the rights to healthcare access, non-discrimination and privacy as elements that should be non-negotiable in the AI Act. The authors suggest that limited regulation of the quality of health AI will lead to distrust in public health and healthcare. They add that biases in training data can lead to discrimination, individual injury or even death. Despite the importance of the field and these concerns, the AI Act does not specifically address health AI. The article's main recommendation is that important uses of AI in health and healthcare be classified as high-risk, to ensure more stringent regulatory requirements.
The Centre for the Governance of AI published a report exploring whether the EU’s AI Act will produce a so-called “Brussels effect”. The report tentatively concludes that both a de facto effect (changes to products offered in non-EU countries) and a de jure effect (influence on regulation adopted by other jurisdictions) are likely for parts of the regulation. In addition, the report states that a de facto effect is particularly likely to arise among large US tech companies offering AI systems that the AI Act terms “high-risk”. The authors also posit that the Brussels effect will likely be more significant than a “Washington effect” or a “Beijing effect”. Finally, the report claims that the very likelihood of a Brussels effect may make getting the AI Act right a matter of global importance.