The EU AI Act Newsletter #55: First-Ever Meeting of the AI Board
The AI Board met for the first time to lay the groundwork for the implementation of the AI Act.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
AI Board's first meeting: The inaugural meeting of the newly established AI Board convened on 19 June at the European Commission building, ahead of the AI Act's formal entry into force expected in early August. The gathering aimed to lay the groundwork for the Act's implementation, focusing on strategic vision, national governance approaches, initial deliverables, and organisational matters such as the Board's mandate and leadership selection. Attendees included high-level delegates from all EU Member States, European Commission representatives, and the European Data Protection Supervisor as an observer. EEA/EFTA members Norway, Liechtenstein and Iceland also attended in an observing capacity. The meeting emphasised the importance of early collaboration on the AI Act's implementation. Discussions covered the Board's role, supervision strategies, and the Commission's priorities for implementation. A follow-up meeting is scheduled for early autumn, after the AI Act enters into force.
Analyses
Data transparency requirement for general-purpose AI: Zuzanna Warso and Paul Keller from Open Future, together with Maximilian Gahntz from Mozilla, published a proposal for implementing the AI Act's training data transparency requirement for general-purpose AI (GPAI). Article 53(1)(d) of the Act requires providers of GPAI models to publish a detailed summary of training content, covering both data sources and datasets as well as narrative explanations. Warso, Keller and Gahntz propose a template for these summaries, emphasising comprehensive scope and sufficient technical detail to benefit both experts and laypeople. The summaries should list primary data collections, provide narrative explanations of other data sources, and clearly distinguish between 'data sources' (the origins of the data) and 'datasets' (the processed data points actually used). This transparency requirement aims to strengthen individuals' and organisations' ability to exercise their rights, enable research into and scrutiny of one of the key inputs to the AI development process, and enhance accountability across the AI industry.
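To make the source/dataset distinction concrete, here is a minimal, hypothetical sketch of how such a summary could be represented in machine-readable form. The class and field names below are illustrative assumptions, not the template Warso, Keller and Gahntz actually propose.

```python
# Hypothetical sketch of a machine-readable training-data summary, loosely
# following the proposal's distinction between data sources (origins) and
# datasets (processed collections). All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class DataSource:
    name: str        # e.g. a web crawl or licensed corpus (illustrative)
    origin: str      # where the data comes from
    narrative: str   # plain-language explanation aimed at non-expert readers


@dataclass
class Dataset:
    name: str                 # processed collection actually used in training
    derived_from: list[str]   # names of the underlying data sources
    size_description: str     # e.g. approximate token or document counts


@dataclass
class TrainingDataSummary:
    model_name: str
    primary_data_collections: list[Dataset] = field(default_factory=list)
    other_sources: list[DataSource] = field(default_factory=list)
    overall_narrative: str = ""
```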
General-purpose AI interpretation: Senior AI Correspondent Luca Bertuzzi at MLex wrote an op-ed on how the regulations for general-purpose AI (GPAI) models might ripple down the value chain. Bertuzzi states that the rules for GPAI models were meant to apply to the likes of OpenAI, Anthropic and Mistral, but that the law's preamble suggests a broader scope, extending obligations to anyone who modifies or fine-tunes GPAI models, proportionate to the level of modification. He says that European companies did not expect this expansion of duties down the value chain. He adds that the implications for downstream operators modifying 'systemic risk' models remain unclear, particularly regarding compliance requirements such as red-teaming exercises or risk mitigation systems. While GPAI model providers can demonstrate compliance through recognised codes and standards, it is uncertain whether downstream operators will be involved in developing these tools. The European Commission's AI Office may struggle to monitor all model fine-tuners and may therefore focus on the top providers, investigating the wider value chain only when issues arise. Alternatively, it might examine samples of fine-tuned models and rely on input from the scientific community to identify potential societal risks.
Assessing pros and cons: Daan Juijn, Emerging Tech Foresight Analyst, and Maria Koomen, Governance Lead, both at ICFG, asked in Tech Policy Press whether the AI Act is a regulatory exemplar or a cautionary tale. Juijn and Koomen write that the Act's key strengths include: 1) prohibiting certain AI systems, such as social credit scoring, to prevent the erosion of social norms; 2) setting minimum requirements for AI use in sensitive sectors to prevent harmful deployment and promote human-centric AI development; 3) regulating general-purpose AI models, recognising their widespread impact and the limitations of downstream providers; and 4) adopting a pragmatic risk classification approach, particularly for general-purpose AI models, using compute thresholds to differentiate risk levels. Yet in the authors' view the Act also has significant shortcomings. It risks becoming ineffective due to unclear rules and insufficient enforcement capacity, and the requirements for high-compute general-purpose AI models contain potential loopholes and may quickly become outdated. To future-proof the Act, they suggest that the EU draft clearer rules, improve enforcement strategies, and include additional safeguards for next-generation general-purpose AI models.
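For illustration only, the sketch below shows what a compute-threshold check of the kind described in point 4 might look like, assuming the commonly used approximation of roughly 6 FLOPs per parameter per training token and the Act's 10^25 FLOP presumption threshold for GPAI models with systemic risk. The example model figures are invented.

```python
# Minimal sketch, not a compliance tool: estimate training compute with the
# rough 6 * parameters * training-tokens heuristic and compare it against the
# AI Act's 1e25 FLOP presumption threshold for systemic-risk GPAI models.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative training-compute threshold


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP


# Invented example: a 70B-parameter model trained on 15T tokens lands at
# roughly 6.3e24 FLOPs, below the 1e25 presumption threshold.
print(presumed_systemic_risk(70e9, 15e12))  # False
```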
Updates from European AI standards writing: CEN-CENELEC technical committee JTC21 is making progress on writing the standards related to the AI Act. Its strategic advisory working group is resolving comments on the "architecture of standards" document. The operational aspects working group is developing new work items on conformity assessment and quality management systems, and is also creating a risk management standard that includes a catalogue of AI risk sources, their potential consequences and corresponding risk management measures. The engineering aspects working group is working on standards for dataset quality and governance, bias management, natural language processing and logging. The foundational and societal aspects working group is developing a trustworthiness framework and exploring impact assessments for fundamental rights. Stakeholders are encouraged to provide input to JTC21 on their expectations for trustworthy AI.
General-purpose AI model evaluation: Marius Hobbhahn, CEO and Co-founder of Apollo Research, published an op-ed in Euractiv discussing how the AI Act's governance of general-purpose AI models with systemic risk relies heavily on evaluations. Hobbhahn argues that maturing the evaluation sector is crucial for successful implementation, because evaluations are increasingly central to governance frameworks proposed by governments and leading AI companies. Evaluations aim to identify AI systems' capabilities and propensities, surface potential risks, and inform mitigation strategies before high-stakes deployment. The Act requires providers of general-purpose AI models with systemic risk to conduct, pass, and document evaluations, with compliance achievable through adherence to codes of practice. To support trust in the evaluations ecosystem, Hobbhahn recommends three key areas of focus: 1) strengthening the field's scientific rigour by focusing on what evaluations can and cannot achieve; 2) empowering the EU AI Office with adequate oversight and adaptability through the scientific panel, codes of practice and reporting on evaluation successes and failures; and 3) planning for future challenges by raising safety standards, mandating independent evaluations, overseeing evaluators, and developing evaluations for future AI capabilities.
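As a purely illustrative sketch of the "conduct, pass, and document" duty mentioned above, the snippet below runs a set of evaluation callables and keeps a structured record of each result. The run_evaluations helper and its fields are assumptions made for illustration, not requirements from the Act or from any code of practice.

```python
# Illustrative record-keeping sketch for documenting evaluation outcomes.
# All names and fields here are assumptions, not regulatory requirements.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable


@dataclass
class EvaluationResult:
    name: str        # e.g. a capability or misuse evaluation
    passed: bool
    details: str     # findings, limitations, mitigations considered
    timestamp: str


def run_evaluations(evals: dict[str, Callable[[], tuple[bool, str]]]) -> list[EvaluationResult]:
    """Run each evaluation callable and keep one documented result per evaluation."""
    results = []
    for name, evaluate in evals.items():
        passed, details = evaluate()
        results.append(EvaluationResult(
            name=name,
            passed=passed,
            details=details,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
    return results


# Example usage with a placeholder evaluation that always passes.
report = run_evaluations({"placeholder-capability-eval": lambda: (True, "no concerning capability found")})
print(all(r.passed for r in report))  # True
```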
Currently, three main approaches to AI legislation can be distinguished internationally: Germany (conservative), Europe excluding Germany (radical), and the United States (pragmatic).
Germany's legislative approach reflects the view that AI is still in its infancy, with many unknowns, and that premature legislation might hinder industry development.
European countries outside Germany are generally more radical, advocating for early establishment of industry norms to prevent AI-related risks.
The United States has introduced legislation such as the "Future of AI Act", focusing on encouraging development with fewer restrictions.
In my opinion, current AI legislation should consider the following aspects:
1. Promoting AI industry development and establishing industry norms;
2. Privacy protection (ubiquitous data capture poses threats to data privacy), including privacy issues arising from machine self-learning;
3. Preventing commercial fraud and deception (e.g., televised robot shows suspected of being controlled from backstage rather than operating autonomously);
4. Preventing the misuse of technology (e.g., the abuse of robot telemarketers);
5. Weighing the opportunities and risks of machines making decisions for humans;
6. Exploring whether to grant robots personhood from a legal perspective;
7. Determining liability for damages caused by robots;
8. Addressing ethical issues arising from the integration of humans and intelligent machines (e.g., ethical concerns about using AI to enhance the abilities of individuals with intellectual disabilities);
9. The issue of AI taking over human jobs;
10. Intellectual property issues related to AI creations.