The EU AI Act Newsletter #75: AI Continent Action Plan
The European Commission has introduced the AI Continent Action Plan to leverage EU strengths like talent and strong traditional industries as AI accelerators.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission sets course for Europe's AI focus: The European Commission has introduced the AI Continent Action Plan to leverage EU strengths like talent and strong traditional industries as AI accelerators. The plan focuses on five pillars: 1) building a large-scale AI data and computing infrastructure; 2) increasing access to large and high-quality data; 3) developing algorithms and fostering AI adoption in strategic sectors; 4) strengthening AI skills and talents; and 5) regulatory simplification. The AI Act has been adopted to create conditions for a functioning single market by ensuring cross-border circulation and harmonised market access conditions. It guarantees AI developed and used in Europe is safe, respects fundamental rights, and maintains high quality. Implementation success depends on how practical the rules are. The Commission is launching the AI Act Service Desk as an information hub, offering tailored guidance particularly beneficial to smaller providers. The Commission will also identify further measures to facilitate smooth application of the AI Act, especially for smaller companies, using feedback from the Apply AI Strategy public consultation.
MEPs demand strict AI Act open source definition: Euractiv's Jacob Wulff Wold reported that thirty progressive MEPs have cautioned the Commission against diluting the definition of "open source" AI to include models with restrictive licensing in AI Act implementation. In a Thursday letter led by Birgit Sippel (S&D) and Markéta Gregorová (Greens), the lawmakers warn that a weak definition "would risk undermining the implementation of the AI Act, putting citizens' rights at risk, and harming European competitiveness." The MEPs specifically criticise Meta's approach to "open source AI", noting that the company prohibits using its Llama models to train other AI systems and requires special licensing for successful AI systems based on Llama. "Their AI is only free and open until a business wants to compete with them," the MEPs state, urging the Commission to clarify that such systems cannot qualify as open source under the AI Act. The lawmakers endorse the Open Source Initiative's definition, which explicitly excludes Meta's models, and call for guidance on open source AI within the Act.
New survey to gather practices for the AI literacy repository: The AI Office has established a living repository of AI literacy practices following the application of Article 4 of the AI Act. This repository already contains over twenty practices from AI Pact organisations. To expand representation, the AI Office has launched a new survey open to all organisations wishing to share their AI literacy initiatives, particularly those related to Article 4 of the Act. The AI Office will regularly verify that all contributions meet transparency and reliability criteria before adding them to the public repository. This resource aims to facilitate learning and exchange among providers and deployers of AI systems. However, replicating practices from the repository does not automatically ensure compliance with Article 4, nor does publication imply Commission endorsement or evaluation. This repository forms part of the AI Office's broader efforts to support Article 4 implementation and promote AI literacy and skills, with a dedicated website coming soon.
Analyses
Europe’s tech sovereignty demands more than competitiveness: Marietje Schaake, International Policy Director at Stanford University’s Cyber Policy Center, and Max von Thun, Director of Europe and Transatlantic Partnerships at the Open Markets Institute, published an op-ed in Project Syndicate in which they observed that amid struggles against US tech giants, the EU increasingly emphasises competitiveness. Schaake and von Thun argue that this narrow focus risks entrenching Big Tech's power rather than reducing it, potentially deepening Europe's dependence on US-controlled infrastructure. True tech sovereignty requires moving beyond competitiveness and deregulation toward a more ambitious strategy. Europe's competitiveness anxiety stems from its inability to challenge US tech giants in the market. As the Draghi report notes, the EU-US productivity gap largely reflects Europe's weaker tech sector, prompting European Commission leaders to make competitiveness central to EU tech policy. This singular focus could prove counterproductive. The current deregulatory emphasis, strengthened by the Draghi report, makes EU policymaking vulnerable to corporate lobbying and may benefit established tech giants rather than European startups. Safeguarding Europe from tech coercion would ultimately enhance competitiveness. Strong enforcement of competition law and digital regulations, including the AI Act, could protect citizens whilst creating space for European alternatives to thrive.
EU’s dual strategy of regulation and investment: Jimmy Farrell, the EU AI Policy Co-Lead for Pour Demain, argued in Tech Policy Press that reducing regulation for large general-purpose AI providers under the EU's competitiveness agenda would not help Europe catch up to the US and China, but instead deepen European dependencies on US tech. As the AI Office and independent experts finalise the Code of Practice (CoP) for the AI Act, most rules will apply only to the largest model providers, protecting SMEs and downstream industries. Regulation is not the reason Europe lacks Big Tech companies, as the EU's tech ecosystem had opportunities to emerge before recent regulations. Europe's challenges stem from market fragmentation and poor access to venture capital, among other factors. Deregulation creates legal uncertainty and liability risks for downstream deployers while slowing trusted technology adoption. Weakening the CoP would primarily benefit large US incumbents, entrenching dependency and preventing tech sovereignty. Downstream deployers building applications on Big Tech models represent Europe's opportunity in AI and would benefit from upstream regulation providing legal certainty.
Human oversight requirements in the Act: Wannes Ooms, Lotte Cools, Thomas Gils and Frederic Heymans from the Knowledge Centre Data & Society published a report sharing their prototyping results regarding Article 14 of the AI Act, which mandates that high-risk AI systems enable effective human oversight through both system design and organisational measures. While the article's flexibility allows for contextual adaptation, it creates implementation challenges. Sector-specific guidelines, technical standards and concrete examples are needed to establish clear benchmarks for compliance. Without these, providers lack ways to measure compliance and face legal uncertainty. The report highlights major concerns around insufficient expertise among users performing oversight and limited awareness of human oversight obligations by both providers and deployers. Authorities should take action to develop this expertise and raise awareness. Despite implementation challenges, the human oversight requirements are considered valuable for building trust in AI systems. However, vague terminology hampers understanding and requires clarification. Additionally, determining the proportionality of measures is difficult, potentially leading providers to choose minimal compliance approaches without certainty that these are sufficient.