Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
POLITICO's Morning Tech reported (unfortunately, behind a paywall) that NGOs are expressing concern that proposed changes to the AI Act will weaken its scope to regulate "high risk" AI. The concern was sparked by rumours and leaks of a compromise text on the AI Act's Article 6 on high-risk classification, drafted by a coalition of Renew, the Christian Democrats, and the Greens. NGOs fear that the proposal would give AI developers more leeway in determining whether their products could threaten human rights, and thus whether they count as "high risk". The proposal has not been finalised, however, and the changes may well be rejected by the co-rapporteurs.
Analyses
Hadrien Pouget argued in Lawfare that the EU's attempt to create technical AI standards will be a big challenge and perhaps even an impossibility. Pouget explains that these standards must fill in the gaps left by the AI Act's relatively vague essential requirements, giving them significant responsibility for the act's enforcement. The standards will need to set thresholds that AI systems must meet through tests and metrics, as well as provide tools and processes for how these systems should be developed. Companies that choose to follow these harmonised standards will be presumed to comply with the AI Act. Pouget states, however, that the field is not mature enough to know how to develop standards that adequately protect consumers. He emphasises that this does not mean that minimum expectations for AI systems should be lowered; rather, if systems cannot yet meet those expectations, it is a sign that the systems themselves need to change.
The Parliament Magazine published an op-ed explaining that the AI Act includes an exemption that could allow the use of certain high-risk technologies in migration-related procedures. The author, Laura Lamberti, writes that the AI Act is intended to regulate the use of AI in migration-related procedures, but some human rights advocates and scholars have raised concerns that the act strays from the EU's "fundamental rights approach". More specifically, Article 83 states that the regulation will not apply to AI systems that are components of large-scale IT systems. In practice, this exempts EU migration databases such as Eurodac and the upcoming ETIAS, as well as automated risk assessments and biometric identification systems, from regulation. Some EU policy analysts worry that this arbitrary exemption could leave hundreds of millions of people who are not European citizens without the safeguards that the AI Act foresees.
Ella Joyner at Deutsche Welle wrote about the use of AI in the workplace, what it means for workers, and related legal developments in the EU. The AI Act specifically lists employment, management of workers and access to self-employment as high-risk uses of AI. The law aims to impose specific obligations on makers and buyers of AI tools before they reach the market, such as a conformity assessment scrutinising the quality of the data sets used to train AI systems, transparency provisions for buyers, and requirements for human oversight. However, in what has been criticised as a missed opportunity, the legislation does not specifically regulate how employers can use AI. Although certain technologies, such as the "social scoring" systems associated with the Chinese government, will be banned outright under the AI Act, this has few implications for the workplace.
Philipp Hacker, Andreas Engel, and Theresa List published an article on Verfassungsblog examining the emerging regulatory landscape around large AI models and suggesting a legal framework. The authors explain that general-purpose AI systems (GPAIS) are models that can perform well on a broad variety of tasks for which they have not been explicitly trained. The version of the AI Act agreed by the Council stipulates that any GPAIS used for high-risk applications must comply with all of the AI Act's obligations for high-risk systems. The problem, according to Hacker, Engel and List, is that since these systems can be put to a thousand uses, at least one of them will practically always be high risk. The authors argue that it would be impossible to establish a comprehensive risk management system covering every possible use of such a system. They propose instead that providers should be required to report on performance metrics and any harmful content issues that arose during development, and that the full set of obligations should apply only when a GPAIS is actually used for a high-risk purpose.
Patrick Grady from the Center for Data Innovation wrote that European policymakers have come to realise that the novel risks posed by AI all stem from applications of machine learning. Grady states that the EU is reconsidering the AI Act's original broad definition of AI and moving toward a narrower definition centred on machine learning. He argues that limiting the scope to machine learning is a step in the right direction, because only machine learning poses new risks to consumers, and the EU cannot afford to be left behind in the development of AI.
Patrick Grady also published an opinion piece arguing that extending the AI Act's ban on social scoring to the private sector would hurt consumers. Grady explains that the AI Act already prohibits public authorities from implementing "social scoring" systems, which build risk profiles of individuals based on surveillance of their behaviour, and that the Council of the EU is now pushing to expand this ban to the private sector. Proponents of the extension argue that private companies could use such scores to discriminate unfairly against individuals, but Grady counters that many companies already use scores based on a multitude of data to assess creditworthiness, evaluate employees, and remove hateful content, to the benefit of users. He cites the example of the streaming platform Twitch, which bans users who commit offline offences.