The EU AI Act Newsletter #72: Europe's Competitiveness
The European Tech Alliance paper "Ensuring Europe's Competitiveness" outlines steps for European companies to remain globally competitive in AI.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Potential AI Act re-opening: According to Euractiv's Tech Reporter Claudie Moreau, AI Office director Lucilla Sioli has confirmed that the AI Act will be included in the Commission's digital package, for which an impact assessment is expected by the end of 2025. Speaking on a privacy panel, Sioli clarified that rather than any "big review" of the AI Act, the plan is to examine how to simplify compliance for SMEs. She indicated to Euractiv that the assessment would likely focus on intersections with other legislation. However, uncertainty remains about whether this means reopening the Act itself. The Commission had previously announced an assessment of whether digital regulations adequately address business needs, particularly for SMEs and small mid-caps. Commission spokesperson Thomas Regnier was unable to confirm any plans to reopen the AI Act.
Analyses
Ensuring Europe’s competitiveness in AI: The European Tech Alliance paper "Ensuring Europe's Competitiveness" outlines steps for European companies to remain globally competitive in AI in light of the general-purpose AI (GPAI) Code of Practice. The paper argues that an effective Code of Practice must support AI development, promote fair competition, and avoid unnecessary regulatory burdens. It proposes five key principles for policymakers: 1) avoid barriers to entry, 2) make AI use practical, 3) keep fair competition as a compass, 4) make AI rules work for all businesses, and 5) support the twin transition. Additionally, the paper recommends four concrete actions to ensure the GPAI Code remains clear, proportionate and aligned with the AI Act: 1) limit scope to GPAI providers, 2) ensure appropriate documentation, 3) avoid mandatory participation in standards-setting, and 4) implement ad-hoc external assessments.
Algorithmic discrimination under the AI Act and the GDPR: Stefano De Luca from the European Parliamentary Research Service wrote that the entry into force of the AI Act in August 2024 raises important questions about its relationship with the General Data Protection Regulation (GDPR). While the AI Act aims to promote human-centric, trustworthy and sustainable AI that respects fundamental rights, including personal data protection, there appears to be tension between the two regulations. A key objective of the AI Act is to mitigate discrimination and bias in high-risk AI systems. The Act therefore permits the processing of 'special categories of personal data' under certain conditions and with privacy-preserving measures. However, the GDPR takes a more restrictive approach to processing such sensitive data. This creates legal uncertainty about how the AI Act's provisions on processing special categories of personal data to avoid discrimination should be interpreted. The GDPR's limitations may prove restrictive in an AI-dominated environment characterised by mass data processing across economic sectors. Addressing these issues may require legislative reform of the GDPR or additional guidelines clarifying the interplay between the two regulations.
The EU should be bolder on tech regulation: Mario Mariniello, non-resident fellow at Bruegel, published an article arguing that, although it has no stated plans to ease enforcement, the Commission has signalled its intention to pause tech regulation and reduce compliance burdens, with Commissioner Henna Virkkunen promising a "more innovation-friendly" AI framework rather than challenging US interference. Mariniello argues that claims of EU over-regulation are exaggerated. For European companies, the primary investment barriers are skilled staff shortages, energy costs and uncertainty, with regulation ranking only fourth. Most regulatory burden likely stems from national rather than EU legislation, and concerns about under-enforcement are more prevalent at the EU level. The EU regulatory framework is far from perfect: the General Data Protection Regulation, for example, could be revised to address the disproportionate burden it imposes on entities relative to their potential to violate privacy (arguably, smaller companies should not have comparable duties to large tech players). But instead of apologising for regulation, the Commission should promote its tech laws to show that the EU digital environment is more stable and resilient. It should intensify antitrust enforcement and merger control to increase competition, apply the DMA and DSA strictly, strengthen the AI Act and build a robust ethical framework for new technologies. In the long term, a well-enforced regulatory framework will bring competitive advantages, reduce uncertainty and attract talent.
Small businesses’ guide to the AI Act: According to a summary from the Future of Life Institute, the Act demonstrates significant focus on small and medium-sized enterprises (SMEs), mentioning them 38 times compared to 7 mentions of 'industry' and 11 of 'civil society'. The legislation includes numerous provisions specifically designed to support and simplify SME compliance. Key SME-focused measures include regulatory sandboxes offering free, priority access and support for testing. The Act addresses financial concerns by ensuring proportionate assessment fees and committing to regular reviews to reduce compliance costs. SMEs also benefit from facilitated participation in standard-setting and the AI advisory forum. The Commission will develop simplified technical documentation forms that national authorities accept for conformity assessments, alongside SME-tailored training activities to support compliance. The legislation also establishes dedicated communication channels for guidance and responses to queries. Finally, obligations for general-purpose AI model providers will be proportionate to provider type, with separate key performance indicators for SMEs under the Code of Practice.
EU accused of leaving a copyright loophole in AI Act: The Guardian's Brussels correspondent Jennifer Rankin reported that Axel Voss, a key architect of EU copyright law, has warned that an "irresponsible" legal gap in the AI Act leaves creative professionals vulnerable. His concerns were echoed by 15 cultural organisations who wrote to the European Commission warning that draft implementation rules were weakening copyright protections. Voss, a German MEP who played a central role in crafting the EU's 2019 copyright directive, explained that this legislation was not designed to address generative AI models that can create text, images or music from simple prompts. He expressed frustration that EU lawmakers had failed to secure strong copyright protections during AI Act negotiations. The situation is complicated by a text and data mining exemption that Voss claims has been misinterpreted. Originally intended for limited private use, it now potentially allows major companies to harvest vast amounts of intellectual property.