Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Facial recognition controversy: Gian Volpicelli from POLITICO reported that the AI Act, initially agreed upon in early December, has undergone last-minute changes that would allow law enforcement to use facial recognition technology on recorded video without judicial approval. German MEP Svenja Hahn criticised these modifications in the final text, calling them an attack on civil rights and likening the potential misuse of biometric technology to practices in authoritarian states like China. She argues that the changes, finalised on 22 December, diverge from the original agreement, which required stricter conditions and judicial oversight for facial recognition use. Hahn highlighted concerns about 'post' (retrospective) facial recognition, which is applied to pre-existing footage, as opposed to real-time surveillance of public spaces, which would be largely outlawed. While some, including Parliament's lead negotiator Dragoș Tudorache, defend the text, others, such as Patrick Breyer of the German Pirate Party and representatives of digital rights groups, echo Hahn's criticism. EU governments will review the final text on 24 January, aiming for approval on 2 February, followed by a Parliamentary vote. Potential amendments would require additional legislative work.
Analyses
Dutch AI supervision plans: The Dutch Data Protection Authority (AP) published its second AI and Algorithmic Risks Report, which sets out a national master plan for the Netherlands aiming for effective control over AI and algorithm use by 2030, involving collaboration among companies, government, academia, and NGOs. The strategy includes annual goals and agreements and integrates regulations like the AI Act. The Act, effective from 2025, will provide oversight of foundation models and their developers, addressing risks like disinformation, manipulation and discrimination. Dutch supervisory authorities are preparing to supervise the AI Act, on which political agreement was reached in December 2023. However, effective control of AI and algorithms extends beyond supervision, requiring proactive risk management and internal controls within companies and organisations for reliable and safe AI use. Aleid Wolfsen, Chair of the AP, notes that the more AI and algorithms are used in society, the more incidents seem to occur, emphasising the need for immediate risk management, particularly as 75% of Dutch organisations plan to use AI in workforce management. He highlights the necessity of robust supervision and regulation to maintain trust in AI and protect fundamental rights.
Regulating foundation models: Cornelia Kutterer, Research Fellow at the Chair on the Legal and Regulatory Implications of Artificial Intelligence at MIAI Grenoble Alpes, wrote an extensive article on regulating foundation models in the AI Act. Kutterer says that the provisional agreement on general purpose AI (GPAI) and foundation models introduces a new risk category, systemic risks, expanding the existing categories in the Act. Under this agreement, all GPAI models require regular updates of technical documentation, including training and testing details, and providers must help downstream AI system integrators understand the models' capabilities and limitations and enable them to comply with the regulation. Providers must also comply with EU copyright law, share summaries of training content, and cooperate with regulatory authorities. Providers of models posing systemic risks face additional obligations: evaluating and mitigating those risks, monitoring and reporting serious incidents, taking corrective actions, and ensuring robust cybersecurity. The agreement maintains a risk-based approach but expands it to include systemic risks, reflecting AI technology advancements. The proposal addresses open-source AI models, exempting them unless they pose systemic risks. This approach aims to balance safety concerns and the benefits of knowledge sharing within the community, navigating tensions between understanding AI model performance and mitigating potential risks.
Perspectives in the music industry: Daniel Tencer, Deputy Editor at Music Business Worldwide, reviewed the AI Act from the perspective of the music industry. Tencer states that the Act is a crucial piece of legislation for the music industry, particularly regarding copyright infringement and transparency in AI training. Rightsholders, including the global music industry representative IFPI, are cautiously optimistic about the Act. The Act seems to support rightsholders by suggesting that using copyrighted materials for AI training requires their permission. However, this is subject to certain exceptions, notably for scientific research, which introduces complexity and potential loopholes. An area of concern for the music industry is the Act's "opt-out" system, which shifts the burden to rightsholders to forbid the use of their material in AI training. This contrasts with the industry's preferred "opt-in" system, in which AI developers would by default need to obtain licences beforehand. The Act indicates that a summary of data sources might be sufficient for compliance, which could be problematic given the vast amount of data in sources like Common Crawl, used in AI training. Overall, while the AI Act is seen as a positive step, stakeholders such as GEMA and Warner Music Group CEO Robert Kyncl suggest it needs further technical refinement, with some preferring stricter regulations.
Generative AI and watermarking: Tambiama André Madiega, Policy Analyst at the European Parliamentary Research Service, wrote a briefing on generative AI and how it is being regulated around the world. While tools like ChatGPT, GPT-4, and Midjourney facilitate content generation, they raise concerns about plagiarism, privacy, AI hallucination (convincingly presenting false information), copyright infringement, and disinformation. The challenge of distinguishing AI-generated content from human-made content is a growing policy issue. Policymakers and AI practitioners are exploring ways to increase the transparency and accountability of generative AI, including content labelling, automated fact-checking, forensic analysis, and watermarking to clarify AI content's origins. The EU's AI Act imposes obligations on AI system providers and users to label AI-generated content and disclose its artificial nature, so that users can make better-informed decisions. These systems must also mark synthetic content in a machine-readable format. GPAI models must meet transparency obligations, respect EU copyright law using advanced technologies, and provide detailed summaries of the copyrighted content used in training. Additionally, generative AI providers must disclose that content is AI-generated and prevent the creation of illegal content, likely by employing watermarking techniques.
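As a purely illustrative aside, a machine-readable label could be as simple as a small provenance record attached to each generated file. The Python sketch below writes a hypothetical JSON sidecar recording a content hash and an explicit AI-generated flag; the field names and helper function are our own assumptions rather than a format specified by the Act, and production systems would more likely rely on provenance standards such as C2PA or on watermarks embedded in the content itself.

    # Minimal, illustrative sketch of a machine-readable "AI-generated" label:
    # a JSON sidecar recording a content hash, the generating system, and an
    # explicit ai_generated flag. Field names and helper are hypothetical; the
    # AI Act does not prescribe this format.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def write_ai_disclosure(content_path: str, generator: str) -> Path:
        """Write a <content_path>.ai.json sidecar next to the generated file."""
        data = Path(content_path).read_bytes()
        manifest = {
            "ai_generated": True,                        # explicit machine-readable flag
            "generator": generator,                      # which system produced the content
            "sha256": hashlib.sha256(data).hexdigest(),  # ties the label to this exact file
            "created_utc": datetime.now(timezone.utc).isoformat(),
        }
        sidecar = Path(content_path + ".ai.json")
        sidecar.write_text(json.dumps(manifest, indent=2))
        return sidecar

    # Example (hypothetical file and model name):
    # write_ai_disclosure("poster.png", "example-image-model-v1")

The hash merely binds the label to one specific file; because a sidecar like this can be stripped simply by re-saving the content, watermarking research focuses on marks embedded directly in the generated media.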