Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The Czech presidency of the Council has prepared the final version of the compromise text. The presidency aims to submit it to Coreper in preparation for the planned General Approach at the upcoming TTE Telecom Council scheduled for 18 November 2022. One change in this version specifies that, when implementing the transparency obligation under Article 52, the characteristics of individuals belonging to vulnerable groups must be taken into account to ensure protection from discrimination. Other small changes clarify when exactly obligations start to apply to providers of general purpose AI systems, and confirm that all such obligations are covered by the provisions on penalties. Annex III has been modified to indicate that only life and health insurance are covered by the respective high-risk AI use case, not all insurance products.
Analyses
Open Loop, a global experimental governance program supported by Meta (previously Facebook), published their first report on the AI Act. For this report, the authors tested some articles of the AI Act draft proposal with companies around the world to assess how understandable, feasible, and effective they are. They found that the taxonomy of AI actors was mostly clear to the participants, though participants did point out that in practice the roles of users and providers are not as distinct as the proposal suggests, given the dynamic nature of AI development, deployment and monitoring. Another finding is that most participants think they would still perform a risk assessment even if their AI system were not considered high risk. They also seemed to understand what was meant by "known and foreseeable risk", and were generally confident that they could carry out this risk assessment.
Mozilla published a policy brief making the case that questions around the definition of general-purpose AI systems (GPAIS), their treatment under the AI Act, and whether to include them explicitly in the Act in the first place remain unresolved. The first main point the authors make in the brief is that the original Commission draft fails to account for the special nature of GPAIS even though they are already at the core of some successful AI services. Secondly, they emphasise that to prevent harm effectively and avoid overburdening individual actors along the AI supply chain, responsibility should be shared between original providers and those adapting GPAIS to high-risk uses. Finally, they argue that imposing the full burden of compliance on those publishing open source GPAIS could stifle important safety and security research as well as downstream innovation.
The law firm William Fry put out a short analysis of the latest Council compromise text. Firstly, the post argues that the AI Act legislative process may take longer to complete than the anticipated Q4 2023 or Q1 2024. Secondly, on general-purpose AI systems (GPAIS), the authors state that a significant legal challenge for producers of GPAIS will be ensuring that their instructions of use sufficiently disclaim any prohibited and high-risk uses, and that users of GPAIS do not go beyond what the usage instructions permit. Thirdly, the authors argue that the phrase "elements of autonomy" in the AI definition does not provide legal certainty, as it is unclear whether the concept of autonomy used here is quantitative or qualitative.
Science Business wrote an article about a policy paper (authored by me) stating that the EU has failed to develop the general purpose AI systems (GPAIS) that are increasingly the backbone of products like chatbots or automated emails. Without these, the article argues, the EU cannot enforce its vision of ethical AI. The author emphasises that EU countries will struggle to develop GPAIS due to the sheer amount of money, data and computation required to build them, and will likely rely on systems developed elsewhere. The article also mentions that there have been multiple warnings that the AI Act fails to cover GPAIS, instead leaving liability for problems like bias and toxicity on the shoulders of EU companies that develop specific applications on top of them.
Knowledge Centre Data & Society also evaluated one of the latest Council compromise texts. The first thing they highlight is that the AI definition will continue to be debated, because the addition of ‘with elements of autonomy’ introduces a lot of ambiguity: is there a quantitative or a qualitative threshold, and how exactly is that assessed? Another point they raise is that the exclusion of purely personal, non-professional uses of AI systems would mean that, under the AI Act, non-professional users of AI systems would not have to comply with the instructions of use. On general-purpose AI systems (GPAIS), the authors argue that from a democratic point of view, the delegation of this issue to the European Commission should be reconsidered. GPAIS are probably the AI systems whose societal risks and challenges are the most difficult to assess, meriting a thorough democratic debate.
POLITICO published an article discussing some human rights activists’ concerns that companies, many of them based outside Europe, will play a key role in deciding the details of the AI Act legislation. One activist worried that because the rules do not target ordinary products but systems with the potential to violate rights, as in biometric surveillance, discrimination, or access to employment and education, standards are an insufficient safeguard. The argument is that technical standards and protocols must be distinguished from measures protecting fundamental rights, since standards cannot address fundamental rights concerns. Furthermore, larger and richer corporations have more influence in standards bodies, despite the serious conflict of interest. Standards bodies, however, do not see themselves as vessels of corporate influence but rather as forums that take all viewpoints into consideration.