The EU AI Act Newsletter #48: EU Needs Oppenheimers
The European Parliament passes the first ever comprehensive AI law in the world. The MEP who co-led the law's drafting calls for serious money to be given to the AI Office.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
European Parliament passes the AI Act: On 13 March, the Parliament approved the AI Act, establishing one of the world's first binding pieces of legislation on AI. The regulation aims to ensure safety, protect fundamental rights, and promote innovation. The Act was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. It will enter into force twenty days after publication, with various provisions becoming applicable over the following 24-36 months. The co-rapporteurs lauded the Act as a starting point for a new model of governance centred around European values and technology, paving the way for addressing AI's societal implications. The regulation is still subject to a final lawyer-linguist check before it is formally endorsed by the Council.
Analyses
AI Office needs top talent: Gian Volpicelli at POLITICO wrote an article about the need for talent in the new European AI Office. According to Dragoș Tudorache, the MEP who co-led drafting of the law, implementing the AI Act effectively requires hiring top AI experts. Tudorache stated that the European Commission needs to recruit "Oppenheimers" – brilliant minds akin to the physicist who assembled the team that built the US atomic bomb – to staff the new central AI Office that will oversee the Act. He urged the hiring of tech workers, academics and futurists who understand AI deeply from the inside, rather than typical European bureaucrats. Attracting such talent is difficult, as the EU faces stiff global competition for AI specialists from tech giants, new US and UK government AI initiatives, and others. The AI Office's initial budget of €46.5 million is dwarfed, for instance, by the £100 million allocated to the UK's AI safety institute. Tudorache called for the next EU budget to give the Office "serious money". Potential candidates to lead the Office include current EU AI director Lucilla Sioli, AI policy head Kilian Gross, and experienced EU official Werner Stengg. Tudorache himself is also rumoured to be interested in the role.
Focus shifts to oversight: Cynthia Kroet at Euronews wrote that as the AI Act comes into effect, attention shifts to the member states, which must appoint national authorities responsible for overseeing compliance within the next 12 months. Spain was the first to establish an Agency for the Supervision of Artificial Intelligence (AESIA), in 2023. The Netherlands' data protection authority created an algorithms department last year with 12 employees, a number expected to grow to 20 this year. Ireland's Department of Enterprise, Trade and Employment will lead the development of the national implementation plan, while Luxembourg is consulting stakeholders to coordinate an efficient regulatory approach. Meanwhile, the European Commission began recruiting for policy and technical roles at the AI Office to harmonise enforcement across EU countries. In addition, trade groups have warned about potential implementation challenges and compliance burdens on businesses.
Fundamental rights impact assessment: DLA Piper lawyers Heidi Waem, Jeanne Dauzier and Muhammed Demircan wrote a summary of the fundamental rights impact assessment (FRIA) requirements under the AI Act. Waem, Dauzier and Demircan state that the Act requires deployers of high-risk AI systems to conduct a FRIA to mitigate potential harms to individuals' fundamental rights, going beyond technical compliance. These FRIAs allow organisations to reflect on the why, where and how of deploying their high-risk systems. The authors explain that this obligation applies where deployers are public bodies, private operators of public services, or certain other specified operators; these must conduct FRIAs prior to deployment and notify market surveillance authorities of the results. FRIAs must describe the intended use processes, time periods, affected groups, specific risks of harm, human oversight measures, and risk mitigation plans.
Steps to make the Act successful: Member of the European Parliament Axel Voss published ten suggestions for making the Act work well in practice. Voss cautioned that the Act is a rather complicated piece of legislation that risks hampering the competitiveness of the European AI ecosystem. He emphasised that to make the Act a true EU success story, the EU needs to simplify compliance, avoid unnecessary bureaucracy and leave space for innovation. His recommendations are the following: 1) harmonise technical standards; 2) harmonise guidelines, model contractual terms and templates; 3) fix legal overlaps; 4) improve the governance system; 5) streamline regulatory sandboxes; 6) simplify compliance for SMEs; 7) resolve issues around training on and access to high-quality datasets; 8) develop a comprehensive AI strategy; 9) attract talent for the AI Office; and 10) suspend penalties until these steps are fulfilled.
Big challenges in AI standards: Hadrien Pouget, associate fellow at the Carnegie Endowment for International Peace, and Ranj Zuhdi, software quality and regulatory consultant, wrote an article about key challenges with the industry-developed standards that companies will use under the AI Act to assess and mitigate risks from AI products. Pouget and Zuhdi argue that current AI standards are incomplete and immature compared to those of other industries, risking inconsistent enforcement and undermining the Act's aim of promoting innovation through legal certainty. They identify three key challenges: 1) extending risk assessment beyond health and safety to fundamental rights such as privacy and non-discrimination; 2) safety and testing requirements that are poorly defined for AI systems, which lack the physical properties traditional standards focus on; and 3) general-purpose AI (GPAI) models like GPT-4, whose broad intended purposes intensify existing risk assessment and mitigation challenges. The authors make several recommendations, from developing guidelines for fundamental rights risk assessments to enhancing transparency in GPAI model adoption.
Jobs
The European AI Office is hiring: The Office is looking for Technology Specialists and Administrative Assistants, who are encouraged to apply by 27 March. These roles will help govern the most cutting-edge AI models. The minimum requirements for the technical role include EU citizenship, a master's degree, at least one year of relevant experience, and fluency in two EU languages (C1 in one, B2 in English/French/German). Contracts can last up to six years through extensions.
We trust in talent.