The EU AI Act Newsletter #57: Bad for Innovation?
EU AI Act's compliance costs and lack of clarity cause concern among tech start-ups ahead of next week's entry into force.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
It is official! The EU AI Act has been published in the Official Journal of the European Union. It will now enter into force on 1 August.
Parliament's possible oversight body: According to Euractiv's Eliza Gkritsi and Théophane Hartmann, the European Parliament is considering establishing a monitoring group for the implementation of the AI Act. MEPs have voiced concerns about the European Commission's approach to key tasks. Brando Benifei, a co-rapporteur on the Act, announced plans to formalise a parliamentary monitoring group, in which he expects to play a leading role. Benifei warned of potential risks if the Commission fails to implement the Act properly, particularly regarding transparency provisions. Some MEPs have urged the Commission to involve civil society in developing codes of practice for general-purpose AI, rather than allowing companies to write them independently, and have received assurances that civil society will be fully involved in the drafting. Benifei will push for independent experts to chair working groups within the AI Office, in order to limit large companies' influence.
US AI Safety Institute and European AI Office dialogue: On 11 July, US and EU officials initiated a technical dialogue between the European AI Office and the US AI Safety Institute in Washington, D.C. This dialogue aims to deepen bilateral collaboration on AI and foster scientific information exchange. The discussion, led by US AI Safety Institute's director Elizabeth Kelly and European AI Office's director Lucilla Sioli, focused on three key topics: watermarking and content authentication of synthetic content, government compute infrastructure, and AI for societal good, particularly weather modelling. The dialogue included sessions with academics, civil society representatives, and government experts from various departments and agencies. Both institutions expressed interest in exploring best practices for watermarking and content provenance, promoting these tools, and furthering scientific exchange on AI safety. They reiterated their shared ambition to develop an international network to advance AI safety science, as discussed at the AI Seoul Summit.
Analyses
Could it hurt innovation? Financial Times correspondent Javier Espinoza wrote (unfortunately behind a paywall) that the EU's AI legislation, intended to foster technological growth by providing clear guidelines, is causing concern among tech start-ups. Many worry that compliance costs, potentially reaching six figures for a fifty-employee company, could stifle small enterprises. Critics argue that the legislation lacks essential details, such as clear rules on intellectual property rights and a code of practice for businesses. Some estimate that sixty to seventy pieces of secondary legislation may be needed to support the Act's implementation. The rushed nature of the legislation has led to vagueness, with many issues left unresolved. One key open question is whether AI systems like ChatGPT act illegally when learning from copyrighted sources. The establishment of codes of practice to guide tech companies in implementing the rules also requires additional legislation. There are concerns about potential lobbying from powerful business groups seeking to water down the rules. Additionally, the Act does not clearly specify which national agency should enforce the rules in individual member states.
Meta has issues with EU laws: As reported by Ina Fried, the Chief Technology Correspondent at Axios, Meta announced it will withhold its upcoming multimodal AI model and future models from EU customers due to regulatory uncertainty. The company cites concerns about complying with GDPR (but not the AI Act) while using data from European users to train models. Meta had previously sent more than 2 billion notifications to users in the EU about its plan to use public posts for model training, offering an opt-out option. Despite briefing EU regulators in advance, Meta claims to have received minimal feedback. The company will still release a larger, text-only version of Llama 3 in the EU. However, withholding the multimodal version could prevent European companies from using these models, even under open licence, and might affect non-EU companies offering services in Europe that utilise them. Surprisingly, Meta plans to release the multimodal model in the UK, where similar data protection laws exist, citing fewer regulatory concerns.
Data protection authorities’ role in the AI Act: The European Data Protection Board (EDPB) has issued a statement emphasising that the Act and EU data protection legislation should be viewed as complementary and mutually reinforcing. The EDPB highlights potential supervision and coordination issues arising from the designation of national competent authorities in areas closely linked to personal data protection. The statement recommends a prominent role for Data Protection Authorities (DPAs) in the emerging enforcement framework, citing their experience and expertise in AI-related issues. The EDPB suggests that designating DPAs as Market Surveillance Authorities (MSAs) would benefit stakeholders by providing a single contact point. The EDPB recommends that Member States designate DPAs as MSAs for high-risk AI systems mentioned in Article 74(8) of the Act.
Responsibilities of the AI Office: Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss in the European Parliament, published a blog post highlighting the 130 responsibilities that the AI Office has been assigned under the AI Act. These responsibilities are divided into four main categories: 1) 39 tasks to establish an AI governance system, to be implemented between 21 February 2024 and 2 August 2026; 2) 39 pieces of secondary legislation, including Delegated Acts, Implementing Acts, guidelines, templates, Codes of Practice, Codes of Conduct, and a standardisation request – some of which have clear deadlines, while others are at the Commission's discretion; 3) 34 different categories of EU-level enforcement activities, some beginning on 2 February 2025; and 4) 18 tasks for ex-post evaluation of the law, to be carried out between 2025 and 2031. This comprehensive list aims in particular to support civil society, academics, and SMEs that do not have the necessary resources to monitor the whole implementation of the Act.