Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV, Members of the European Parliament have started to discuss the more sensitive issues in the AI Act. In one of the latest meetings, biometric recognition systems were on the table as one of these difficult topics. The original proposal banning real-time biometric identification systems included exceptions for identifying kidnapping victims, preventing terrorist attacks and flagging criminal suspects. After criticism from some lawmakers and civil society organisations, the reference to 'real-time' has been removed, and the prohibition has been extended to private spaces and the online sphere. Other topics discussed in the meeting included fundamental rights safeguards, the research and development exemption, the EU database, and more.
In another article, EURACTIV reports that the European Commission has managed to postpone the Council of Europe's discussions about an AI treaty until the EU's AI Act is passed. The article states that the Commission wants to negotiate on behalf of the EU member states in order to position the AI Act as the international standard. Some critics say that EU internal dynamics are affecting non-EU countries and undermining the independence of the Council of Europe. A spokesperson for the Commission, however, stated that a constructive and pragmatic approach will be found to work on the two instruments in close coordination.
Analyses
Future of Life Institute and University College London researchers (I am one of the authors) have published a paper proposing a qualitative definition of general purpose AI systems (GPAIS) for the EU AI Act. The authors emphasise that the definition offered by EU institutions so far lacks clarity and does not offer sufficient guidance to differentiate between fixed-purpose and general-purpose AI systems. They propose the following definition of GPAIS: an AI system that can accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained. This definition includes unimodal (e.g., GPT-3 and BLOOM) and multimodal (e.g., Stable Diffusion and DALL-E) systems, but excludes systems such as image classifiers and voice production and recognition systems.
HolisticAI, a company offering an automated AI risk management platform, wrote a summary of the AI Act for enterprises. The key takeaways from the summary are: 1) the EU plans to adopt the AI Act within the next year; 2) the AI Act is set to be the 'GDPR for AI', with hefty penalties for non-compliance, extra-territorial scope, and mandatory requirements for businesses; 3) the AI Act will shine a spotlight on AI risk, significantly increasing awareness of the importance of responsible AI among businesses, regulators and the wider public; and 4) enterprises should act now to establish AI risk management frameworks in order to minimise legal, reputational and commercial damage.
The European Digital SME Alliance published a statement calling for balanced obligations across the AI value chain to support SMEs, protect fundamental rights, and advance digital sovereignty. Start-ups and SMEs tend to access large pre-trained models through APIs, and this has been commercially successful so far, the letter says. This success is threatened, however, if the AI Act places obligations for such systems on downstream users who build their businesses on API access from a handful of providers. Compliance costs would fall for these downstream users if such systems adhered to the AI Act's requirements before being sold. Finally, the statement argues that it may be impossible for SMEs to comply with the requirements for general-purpose AI systems, since they will lack both the necessary access to source data and the ability to ensure that accuracy and robustness are built into the system.
CECIMO, the organisation representing the machine tool industry and related manufacturing technologies, published a position paper on the AI Act. In the paper, they list a series of provisions in the AI Act that could create significant hurdles and additional burdens for the machine tool manufacturing sector. Firstly, they believe that the current definition of an AI system is too wide because it includes widely used statistical and optimisation methods that require little intelligence. Secondly, they recommend that the high-risk requirements apply only to AI applications in areas where a clear regulatory gap has been demonstrated. Finally, the authors recommend adjusting and clarifying the balance of responsibilities between different actors, highlighting the fact that product manufacturers do not possess detailed technical knowledge of the AI system in place, whereas the AI software providers do.
The Center for Data Innovation writes in a blog post that the current regulatory sandbox proposal in the AI Act would weigh down firms with more regulatory complexity while offering them little in return. The author proposes that policymakers revise the AI regulatory sandbox to encourage more regulatory experimentation, give firms equal access regardless of size, and allow foreign companies to participate. According to the post, sandboxes are environments with specific rules that enable experimentation and mistakes with new technologies and business models, but the AI Act risks not providing that flexibility, as regulators may not offer liability protection. In addition, the post argues that it should not matter whether participants in the sandboxes are large or small businesses; both are important for AI innovation. Finally, according to the author, to set global standards the EU should also allow non-EU companies to test their systems in the sandboxes.