The EU AI Act Newsletter #74: Human Rights Are Not Optional
Architects of the AI Act have urged Brussels to halt "dangerous" moves to water down the rules, a change that would exempt major US tech companies like OpenAI and Google from key regulatory requirements.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
AI Board convenes for its third meeting: The AI Board, comprising senior representatives from EU Member States, met on 24 March under the chairmanship of Dariusz Standerski, Polish Secretary of State at the Ministry of Digital Affairs. In the morning, Executive Vice-President Henna Virkkunen presented the European Commission's latest strategic initiatives and priorities, and Member States shared their AI Act implementation approaches in a roundtable discussion. The afternoon agenda covered decisions on joint communication efforts and on support for AI Act compliance. The AI Office gave updates on recent deliverables, including the guidelines on the definition of AI systems and on prohibited practices, the third draft of the Code of Practice for general-purpose AI, and the call for the scientific panel. A technical briefing from the AI Office's Safety team informed Member States about recent technological advancements and regulatory challenges. The meeting's outcomes will guide the next steps in implementing the AI Act.
German coalition disagrees on AI regulation: According to Jacob Wulff Wold at Euractiv, leaked documents reveal disagreement between the centre-right Christian Democrats (CDU/CSU) and the centre-left Social Democratic Party (SPD) working groups on AI regulation and digital sovereignty in the negotiations for a new German government platform. The CDU/CSU advocates revising the AI Act "to reduce burdens on the economy" and aims to lay the groundwork for consolidating future data legislation. The SPD, meanwhile, remains "committed to an AI Liability Directive at the European level". Both parties support regulation to accelerate data centre development, though the CDU/CSU specifically wants to amend existing regulations to that end.
EU lawmakers warn against dangerous moves to water down AI rules: According to Melissa Heikkilä and Barbara Moens at the Financial Times, architects of the AI Act have urged Brussels to halt "dangerous" moves to water down the rules, a change that would exempt major US tech companies like OpenAI and Google from key regulatory requirements. The European Commission is considering making more of the Act voluntary rather than mandatory, including provisions designed to force AI companies to ensure that cutting-edge models do not produce violent or false content and are not used for election interference. The move follows intense lobbying from Donald Trump and Big Tech companies. Several prominent MEPs involved in AI regulation have written to digital chief Henna Virkkunen, warning that acceding to such demands is "dangerous, undemocratic and creates legal uncertainty". The letter states that if providers of the most impactful general-purpose AI models were to adopt more extreme political positions or facilitate election manipulation, the consequences "could deeply disrupt Europe's economy and democracy". Signatories include most of the MEPs who negotiated the AI Act, as well as former Spanish digitalisation minister Carme Artigas, who led the member state negotiations.
Analyses
The Code shouldn't make human rights optional: Laura Lazaro Cabrera at the Centre for Democracy and Technology Europe, Laura Caroli at the Wadhwani AI Center at the Center for Strategic and International Studies, and David Evan Harris at the University of California, Berkeley published an op-ed in Tech Policy Press expressing grave concern that the penultimate draft of the Code of Practice fails to protect human rights by dramatically narrowing the risk mitigation requirements for AI developers. The draft has shifted from a two-tier risk approach to one in which previously "additional" risks are now labelled "optional". This optional category covers an alarming range of serious concerns: risks to public health and safety, fundamental rights issues (including freedom of expression, discrimination, privacy and child protection), and societal risks (for example, to the environment or democracy). As a result, human rights risks no longer appear among the main systemic risks that providers of powerful general-purpose AI models must assess. Lazaro Cabrera, Caroli and Harris point out, however, that discrimination and other harms stemming from training data are already well documented in AI models. The new approach contradicts both the AI Act's intent and international frameworks such as the Hiroshima Code of Conduct for Advanced AI Systems, which explicitly requires assessment and mitigation of privacy and discrimination risks.
Hungary’s use of facial recognition likely violates AI Act: According to Anupriya Datta at Euractiv, Viktor Orbán's latest amendment to the Hungarian Child Protection Act, which would allow facial recognition systems to be used against Pride event participants, likely violates EU data protection and AI laws. Under the AI Act, real-time facial recognition for police monitoring of public spaces is generally prohibited, with exceptions only for national security threats or terrorism. The Hungarian proposal would ban Pride events, claiming they violate the Child Protection Act, and authorise police to use facial recognition to identify participants. Dr Laura Caroli, who negotiated the AI rules for the European Parliament, explained that this use is "actively prohibited by the AI Act" under Article 5, which prevents member states from misusing live facial recognition. She argues that even if Hungary invoked national security or counter-terrorism grounds, the measure would still violate the Act.
Is web scraping the only copyright concern for AI? Paul Keller, Director of Policy at Open Future, wrote that while the third draft of the Code of Practice shows improvements for open-source AI developers, the ambitions of its copyright chapter have been significantly reduced. A curious limitation has emerged: the requirement "to put in place a policy to comply with Union law on copyright" now mostly addresses only data obtained by "crawling the World Wide Web". The changes also include the deletion of all key performance indicators for individual measures and watered-down public disclosure requirements for copyright policies. Limiting compliance with rights reservations to web-crawled data creates problematic gaps: it would not cover scenarios such as Meta's documented use of pirated books, and many data acquisition methods beyond web crawling would fall outside these commitments. Keller argues that reducing copyright compliance to web crawling contexts is illogical, as the real concerns arise from how acquired data is used to train AI models.