The EU AI Act Newsletter #60: California Bill Could Enhance the EU's AI Act
Turing Award-winning AI expert Yoshua Bengio recommends that the EU take inspiration from the California AI safety bill as it implements its own AI regulation.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Analysis
California bill could enhance EU AI Act: Euractiv's Jacob Wulff Wold reported on the California bill, Senate Bill (SB) 1047, which would require the developers of AI models trained with more than $100 million in computing resources to follow and disclose a Safety and Security Plan (SSP) aimed at preventing critical harm. SB 1047 defines critical harm as incidents causing mass casualties or material damages exceeding $500 million. Prominent figures like Yoshua Bengio, the Turing Award-winning AI expert, have praised the bill, recommending that the EU take inspiration from it as it implements its own AI regulation, the AI Act. The European Commission has been closely monitoring the Californian bill and met with Senate representatives in early summer to exchange views. Some experts argue that if SB 1047 is passed, it could strengthen the EU's regulatory approach by reducing the perceived bias against US firms and lowering compliance costs for companies operating in both California and the EU. However, tech giants Meta, OpenAI and Google have opposed the bill, claiming it could jeopardise open-source AI and drive innovation out of California. Critics argue that the bill focuses on regulating the technology itself rather than its applications.
Who will lead the GPAI code of practice? Kai Zenner, the Head of Office and Digital Policy Adviser for MEP Axel Voss, and Cornelia Kutterer, the Managing Director of Considerati, published an op-ed in Euronews arguing that the chairs and vice-chairs appointed to guide the Code of Practice for General-Purpose AI (GPAI) models will play a critical role in shaping the future implementation of the AI Act. The AI Office is expected to appoint these individuals within the next three weeks. Drawing inspiration from the 2022 Code of Practice on Disinformation, the EU has adopted a co-regulatory approach to AI safety, given the fast-evolving technology, sociotechnical values and the complexities of content policies and moderation decisions. However, some are concerned that companies might only commit to the bare minimum. The independence and expertise of the chairs are therefore crucial for maintaining the credibility and balance of the drafting process. Zenner and Kutterer state that the selection process should focus on candidates with strong technical, sociotechnical or governance expertise, combined with experience in running committee work. In addition to strong EU representation, involving internationally renowned experts in these roles could enhance the legitimacy of the GPAI Code and encourage non-EU companies to align with the process.
Actions for civil society and funders on the enforcement of the AI Act: A report commissioned by the European AI & Society Fund and carried out by the European Center for Not-for-Profit Law (ECNL) highlights the role civil society can play in shaping the implementation of the Act. The coming months offer opportunities for civil society to influence outcomes in favour of the public interest, fundamental rights and protection for the most vulnerable, including ensuring that bans on the most harmful AI systems are tightly drawn, that systemic risks are addressed for products like ChatGPT, and that exemptions in areas like national security and migration are limited. The report draws lessons from the Digital Services Act (DSA), demonstrating that civil society can effectively shape the implementation of a law and help hold companies to account. To succeed, civil society organisations (CSOs) need structured coordination mechanisms to engage with relevant institutions, such as the AI Office and the European Data Protection Supervisor. Additionally, CSOs require research capacity, technical expertise and fundamental rights-based legal analysis for advocacy around prohibitions, risk designations, exemptions and other issues.
General-purpose AI model requirements: Stanford's Center for Research on Foundation Models published an overview of the Act's general-purpose AI requirements. They emphasise at the outset that the AI Act is a key policy priority and that they will continue to engage with the EU through the AI Office and Scientific Panel. Stanford's analysis identifies 25 requirements for general-purpose AI, most of which are disclosure obligations, whereby developers provide information to governments or downstream firms. Public-facing disclosure is minimal: only one requirement, publicly disclosing a summary of training data, could enhance transparency. Substantive requirements mainly apply to models deemed to pose systemic risk, necessitating actions like model evaluation, risk mitigation, incident reporting and cybersecurity protection. Currently, eight models, from Google, Meta, OpenAI and a few other companies, would meet the criteria for systemic risk based on training compute. The Act also offers partial exemptions for open-source models, though not for those posing systemic risks. As other jurisdictions, including the US, develop policies for open foundation models, the role of licences remains a key consideration.
How different stakeholders are thinking about compliance: Caitlin Andrews, Staff Writer for the IAPP, wrote an article outlining industry leaders' concerns about the uncertainty surrounding the integration of the AI Act into the EU's regulatory framework, urging governing bodies to clarify details well before compliance deadlines. Marco Leto Barone of the Information Technology Industry Council, which includes major companies like Microsoft, IBM and Anthropic, highlights the need for clarity about how the Act interacts with existing regulations, such as the General Data Protection Regulation and the Cyber Resilience Act. The council advocates coordinated efforts between EU member states and the Commission to avoid regulatory overlap and ensure predictability for the AI industry. Different countries are taking varied approaches to designating the authorities responsible for enforcement, raising concerns about uniformity. For example, Spain has established a new AI supervisory agency, while Germany and Denmark have adapted existing bodies. Businesses like Roche face challenges in determining how their diverse AI portfolios fit within the Act's requirements. Roche's Global Head of Digital Health, Johan Ordish, said the challenge is to comply without painting all AI systems with the same brush.
Thank you, as always, for the continuously updated news. I'm eagerly looking forward to reading more about the AI Supervisory Agency in Spain, which officially began its operations today, September 2nd: https://coiiclm.org/la-aesia-empezara-a-funcionar-el-2-de-septiembre-en-a-coruna