The EU AI Act Newsletter #50: AI Office Needs a Leader
Svenja Hahn from Renew Europe, Kim van Sparrentak from the Greens, and Axel Voss from the European People’s Party ask how the leader for the Office will be selected.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
MEPs have questions about the AI Office: Euractiv's Tech Editor Eliza Gkritsi reported that three members of the European Parliament have queried the European Commission about the recruitment process for the AI Office, which will be responsible for enforcing the EU's landmark technology law. Svenja Hahn from Renew Europe, Kim van Sparrentak from the Greens, and Axel Voss from the European People's Party, all of whom acted as shadow rapporteurs for the AI Act, seek clarity on how the office's leader will be selected and how the EU plans to attract talent in a fiercely competitive global market. Despite the law's passage in March, no leader for the office has been announced, even though some enforcement provisions are due to apply six months after the law enters into force. Dragos Tudorache, rapporteur for the legislation, also voiced concerns about the transparency of the process. While some MEPs initially proposed the AI Office as an independent entity, it was integrated into the Commission in a compromise Tudorache made to ensure regulatory coherence and help foster the digital single market.
Governments need to appoint AI regulators: Senior EU Policy Reporter Cynthia Kroet at Euronews wrote that the European Commission is urging national governments to appoint AI regulators as part of the AI Act's implementation, set to begin by the end of this year. Roberto Viola, Director General of the Commission's digital unit, announced that letters will be sent to member states requesting these appointments, with a 12-month deadline for setting them up. These regulators will form the AI Board, responsible for harmonising the approach to AI regulation across the EU. The Commission aims for the Act to fully enter into force by June, with bans on prohibited practices taking effect by the end of the year. Recruitment for positions at the AI Office has started, with plans to hire around a hundred staff members, some of whom will start work this autumn. However, the selection process for the head of the AI Office will only begin once the Act is fully approved.
Analyses
What do economists think? The Forum for the Kent A. Clark Center for Global Markets conducted a poll posing two questions about the AI Act to economics experts. On the Act's potential impact on European tech firms, opinions varied: 4% agreed it would disadvantage them, and 2% strongly agreed. Conversely, 2% strongly disagreed and 16% disagreed. One economist highlighted concerns that regulatory complexity would drive entrepreneurs away from the EU. Another economist mentioned the potential for a Brussels effect, suggesting that "tech firms may well adopt EU regulation globally." Regarding the claim that the Act could enhance research and innovation, responses were mixed: 24% agreed, while 6% disagreed and 2% strongly disagreed. One expert noted, "Providing a clear set of rules removes regulatory uncertainty, which should promote development of AI systems." Another said, “There is little question that, in areas defined as harmful, including e.g. cars and large LLMs, the regulatory and compliance burden is larger and hence research and innovation will decrease.”
A conversation with Dragoș Tudorache: Senior Reporter Melissa Heikkilä of MIT Technology Review interviewed Dragoș Tudorache, the politician behind the AI Act, who played a pivotal role in the Act's development as one of its two lead negotiators in the European Parliament. Tudorache's interest in AI was sparked in 2015 after reading Nick Bostrom's book Superintelligence, prompting him to advocate for AI regulation. Upon his election to the Parliament in 2019, he seized the opportunity to work on AI regulation, following President Ursula von der Leyen's commitment to it. Tudorache steered the AI Act through intense negotiations, overcoming everything from tech company lobbying to EU member states flip-flopping on their positions. Although criticised by civil society for insufficient human rights protection and by industry for excessive restriction, the final compromise is, Tudorache points out, simply textbook politics.
Failure to protect civic space and the rule of law: Liberties, the European Center for Not-for-Profit Law (ECNL) and the European Civic Forum (ECF) analysed the AI Act's shortcomings from the perspective of protections for fundamental rights and civic space. The organisations argue that the rapid finalisation of the Act resulted in significant gaps and legal uncertainties, leaving much to be determined by the European Commission through delegated acts, guidelines and codes of conduct. The ECNL, Liberties, and ECF emphasise the need for civil society, particularly marginalised groups, to be heard in the implementation and enforcement phases. The key flaws in the Act identified by the organisations include: 1) gaps and loopholes in the prohibitions, 2) self-assessment by companies, 3) weak standards for fundamental rights impact assessments, 4) the risk of AI in national security becoming a rights-free zone, and 5) a lack of guaranteed civic participation in implementation.
The AI definition still lacks clarity: Tervel Bobev, Researcher at the Centre for IT & IP Law, published a blog post arguing that the definition of AI systems remains vague, raising concerns about effective implementation. The Act defines an AI system as a "machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." The Act further details that "autonomy" implies a system's operation with minimal human intervention, "adaptiveness" refers to its self-learning capabilities during use, and "inference" involves deriving models or algorithms from data to achieve specific outputs. According to Bobev, despite these clarifications, the Act's language remains ambiguous, including about the extent of autonomy required and how it is distinguished from mere automation. This uncertainty suggests that the Act's definition will likely need further interpretation through case law and regulatory guidance.
Did you find this edition helpful? Please share it with a friend or colleague who might benefit from it. I'm always open to hearing your thoughts on the AI Act as well as feedback for the newsletter. Thank you for reading!
The selection of a leader for the AI Office seems to be unfolding like a suspense plot ;-)