The EU AI Act Newsletter #54: AI Office Hiring
The AI Office is hiring seconded national experts who could be responsible for carrying out innovation, supervision and enforcement tasks.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
AI Office
Seconded national experts to be hired as Policy Specialists: The EU AI Office has put out a call for Seconded National Experts to serve as Policy Specialists. The seconded national experts could be responsible for carrying out innovation, supervision and enforcement tasks for the AI Office. The deadline is 25 July. Joining the AI Office presents an opportunity to oversee general-purpose AI models and ensure they adhere to the requirements of the AI Act. As part of the team, candidates will act as liaisons to the scientific community, aid in enforcing the Act, and help establish the AI Office as a global benchmark. The role also involves fostering collaboration with similar institutions in other jurisdictions to advance understanding of AI governance. Requirements include three years of relevant experience, with at least one year at the current employer; expertise in EU digital and AI policies; employment by a national, regional or local administration, an intergovernmental organisation, or a public sector body, university or independent research institute; and proficiency in one EU language and satisfactory knowledge of another.
AI Office structure announced: The AI Office is composed of several units: 1) Regulation and Compliance, to ensure uniform application of the AI Act; 2) AI Safety, to identify risks of very capable general-purpose models and possible mitigation measures; 3) Excellence in AI and Robotics, to support R&D; 4) AI for Societal Good, to engage in beneficial AI projects; and 5) AI Innovation and Policy Coordination, to oversee EU AI strategy, including investments, uptake of AI and regulatory sandboxes. Led by the Head of the AI Office and advised by experts on science and international affairs, the office will employ over 140 staff, including specialists in technology, law, policy and economics. It will enforce the AI Act, work with developers and experts to draw up codes of practice, and collaborate with member states and stakeholders. Organisational changes will take effect on 16 June, with the AI Board convening by the end of June. The office is preparing guidelines on the AI system definition and on prohibitions, both due six months after the AI Act enters into force, as well as codes of practice for general-purpose AI models due within nine months of the Act's entry into force.
Why work at the AI Office? The Future of Life Institute (us!) published a summary of reasons for working at the AI Office on the AI Act website. The role offers an opportunity to lead global responsible AI governance by enforcing the world's first comprehensive binding AI regulation. With the AI Office's pioneering position in a large consumer market, candidates could influence global AI standards on model evaluations. Opportunities include promoting AI safety across 27 EU nations and beyond, collaborating with international partners, and enforcing compliance, including through corrective actions against non-compliant AI models. Working within a multidisciplinary environment, candidates can engage with specialists in tech, law, ethics and more, contributing to frontier research on AI risk assessments and mitigations. This role provides a chance to make a significant public service impact, contributing to policies affecting millions of people. With considerable growth plans, there are opportunities for career development and leadership roles in global AI governance.
Analyses
Human vulnerability in the AI Act: Gianclaudio Malgieri, Associate Professor of Law at Leiden University, published an article on Oxford University Press' blog about how the concept of ‘human vulnerability’ is used in the Act. Malgieri starts by saying that the digital revolution has heightened concerns about vulnerabilities in online interactions, including addictive social media, manipulative commercial practices and data exploitation. The EU's General Data Protection Regulation (GDPR), the first major digital regulation, acknowledges the increased vulnerability of online users. Despite numerous references to vulnerability, the AI Act lacks a clear definition, though Article 7(h) suggests vulnerability is a contextual, relational concept influenced by power imbalances and personal or social factors. This implies that vulnerability is not a fixed label but a nuanced condition.
Implications of compute and capabilities: Daan Juijn, Emerging Tech Foresight Analyst at The International Center for Future Generations, published a report arguing that advanced AI systems are evolving rapidly and that, as AI companies increase their compute budgets, breakthrough systems such as highly proficient autonomous agents could emerge, potentially rejuvenating European economies but posing new risks such as cyber-attacks or large-scale accidents. The author states that the European Parliament recently approved the landmark AI Act, but it may be insufficient to curb risks from upcoming models. He makes the following recommendations to future-proof the Act: 1) allocate EU compute resources strategically to AI safety and develop specialist AI systems that can help tackle societal problems; 2) extend the AI Act's general-purpose AI regulation to cover severe systemic risks from next-generation models; 3) scale and prioritise enforcement efforts in line with compute trends; 4) establish an AI foresight unit; and 5) implement a multilateral compute oversight system, focusing on large AI clusters and training runs.
Overcoming big tech AI merger evasions: Jacob Schaal, Policy Trainee at Pour Demain, and Tekla Emborg, Fellow at the EU Tech Policy Fellowship, proposed adjustments to the European Commission's merger rules in Verfassungsblog. The suggested changes would enhance oversight of providers of AI models with systemic risk by integrating classifications from the AI Act, notably those of high-risk systems, into antitrust considerations. The authors say this ensures appropriate competition oversight and safeguards citizens' rights against systemic AI risks. Schaal and Emborg argue that AI models with systemic risk can be presumed large enough to qualify for (quasi-)merger investigations, increasing the Commission's ability to effectively tackle the concentration of power in the AI supply chain. The authors add that Microsoft's investments in OpenAI and its partnership with Mistral, which have stirred concern among policymakers, exemplify the need for such measures. Vertical outsourcing in AI, like Microsoft's, currently evades merger regulations.