The EU AI Act Newsletter #67: More Jobs at the AI Office
The European Commission is currently recruiting Legal and Policy Officers for the European AI Office.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Analyses
Analysis of the 1st Code of Practice Draft: Yacine Jernite and Lucie-Aimée Kaffee from Hugging Face evaluated the measures proposed in the first draft of the AI Act Code of Practice for general-purpose AI. Jernite and Kaffee state that whilst the transparency measures are generally well-directed, requiring detailed documentation and performance evaluation, improvements are needed to increase the public accessibility of information and to adapt the language for collaborative development settings. The copyright-related measures are deemed promising, particularly in their emphasis on stakeholder collaboration and standardisation. However, the authors raise concerns about potential fragmentation and the impact on smaller developers and copyright holders. Their most significant criticism targets the proposed taxonomy of systemic risks, which they consider too narrowly focused on remote hazards whilst overlooking more immediate concerns. As an example, the authors suggest a restructured taxonomy addressing three key areas: risks from inappropriate AI deployment in critical settings, information security risks at scale and risks from scaled-up abuse. This revised framework aims to better support collaborative, evidence-based solutions and to accommodate both large- and small-scale developers.
Transparency around AI training data: Zuzanna Warso, Director of Research at Open Future, and Maximilian Gahntz, AI Policy Lead at the Mozilla Foundation, wrote an op-ed in Tech Policy Press about the EU AI Act's mandate that developers of general-purpose AI models publish a "sufficiently detailed summary" of their training data. This requirement aims to protect various legitimate interests, extending beyond copyright to include privacy rights, academic freedom, anti-discrimination, cultural diversity, fair competition and consumer protection. The EU AI Office is currently developing a template for this summary, addressing key questions about what information is required and how "legitimate interest" is defined. However, AI developers are resisting comprehensive disclosure, citing trade secret concerns. Warso and Gahntz argue that these trade secret claims require careful scrutiny and should not serve as a blanket excuse for a lack of transparency. They emphasise that the requirement is for a summary of the training data, not the data itself, and that the EU Trade Secrets Directive allows public interest considerations to override trade secrecy.
The AI Act's impact on security law: Christian Thönnes, Doctoral Researcher at the Department of Public Law of the Max Planck Institute for the Study of Crime, Security and Law, introduced a debate series in Verfassungsblog on the AI Act's impact on security law. The AI Act is the world's first comprehensive AI law, with significant implications for security matters including border control, financial monitoring, anonymity rights and criminal justice. Although the European Treaties give the EU only limited authority over national security, it has leveraged its legislative powers over the internal market and data protection to regulate how security agencies use modern technologies. This development reflects the increasingly transnational nature of security threats and modern policing's growing reliance on technology. Thönnes argues, however, that the integration of European security law remains imperfect, creating opportunities for legal scholarship to identify and address gaps in the new security architecture. Key challenges include reconciling national security exceptions with EU oversight, harmonising market regulations with national security standards and addressing specific concerns such as real-time remote biometric identification systems.
Clarifying the AI definition: The European Law Institute responded to the European Commission's consultation on the definition of AI in the AI Act. Article 3(1) of the Act defines AI systems, and this definition is central to the Act's regulatory framework. However, according to the response, the definition, based on the OECD's November 2023 revision, lacks clarity in distinguishing AI from other IT systems. The European Law Institute proposes a 'Three-Factor Approach' for identifying AI systems: 1) the amount of data or domain-specific knowledge used in development; 2) the system's ability to create new know-how during operation; and 3) the formal indeterminacy of outputs, where human discretion would normally apply. These factors operate as a flexible scoring system in which strength in one area can compensate for weakness in another: generally, an IT system qualifies as AI when it scores at least three pluses in total, spread across at least two of the three factors. This interpretation acknowledges the AI Act's intentionally abstract definition while aiming to balance technical neutrality with practical applicability in categorising AI systems.
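To make the compensation logic concrete, here is a minimal sketch in Python of how such a scoring rule might be encoded. The factor names, the per-factor plus counts and the function itself are illustrative assumptions; the European Law Institute's response only specifies the rule summarised above (at least three pluses in total, present in at least two factors).

```python
# Illustrative sketch of the 'Three-Factor Approach' scoring rule.
# Assumption: each factor is assigned a non-negative number of "pluses";
# the response itself does not prescribe a numeric encoding.

FACTORS = (
    "data_or_domain_knowledge",  # 1) data/knowledge used in development
    "creates_new_know_how",      # 2) ability to create know-how in operation
    "formal_indeterminacy",      # 3) indeterminate outputs where human
                                 #    discretion would normally apply
)

def qualifies_as_ai(pluses: dict[str, int]) -> bool:
    """Return True if the system scores >= 3 pluses spread over >= 2 factors.

    Strength in one factor can compensate for weakness in another, but a
    single factor alone cannot qualify a system on its own.
    """
    total = sum(pluses.get(f, 0) for f in FACTORS)
    factors_present = sum(1 for f in FACTORS if pluses.get(f, 0) > 0)
    return total >= 3 and factors_present >= 2

# A system strong on training data (2 pluses) with formally indeterminate
# outputs (1 plus) qualifies; three pluses in one factor alone does not.
print(qualifies_as_ai({"data_or_domain_knowledge": 2,
                       "formal_indeterminacy": 1}))  # True
print(qualifies_as_ai({"creates_new_know_how": 3}))  # False
```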
Learning from model deployment guidance: Thalia Khan and Madhulika Srikumar from Partnership on AI presented three key recommendations for the EU AI Office and other policymakers crafting foundation model guidelines, drawing on their experience developing the organisation's Model Deployment Guidance. Firstly, guidance should be iterative, allowing for updates as foundation models become more widespread. Public feedback highlighted that responsibility for safe development now extends beyond model providers alone, leading to expanded guidance for open foundation models. Secondly, guidelines should be tailored to specific model and release types: different approaches are needed for frontier models (paradigm-shifting general-purpose AI) than for research releases or closed deployments. To facilitate this, they have published specific checklists for three scenarios: restricted frontier models requiring extensive safety measures, open advanced models emphasising collaborative governance and closed frontier models for internal deployments. Finally, Khan and Srikumar argue that governance should extend beyond model providers to include the entire AI value chain, encompassing model adapters, hosting services and application developers. This broader approach ensures shared responsibility for safe AI development and deployment across all stakeholders.
Jobs
The AI Office is hiring Legal and Policy Officers: The European Commission is currently recruiting Legal and Policy Officers for the European AI Office through two open calls for expression of interest. These roles offer an opportunity to influence the development of trustworthy AI within the EU. Policy Officer candidates must have at least three years of experience in EU digital policies, demonstrate strong analytical and research capabilities, and be able to convert findings into practical policies. Legal Officer applicants should possess a minimum of three years of experience in EU digital legislation, alongside excellent analytical and communication abilities. The positions offer monthly salaries ranging from €4,100 to €8,600, with limited tax obligations. Interested candidates must submit their applications by 15 January 2025.