The EU AI Act Newsletter #65: Free Speech and National Plans
Ireland's Minister of State Dara Calleary has published a list of nine national public authorities responsible for protecting fundamental rights under the EU AI Act.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Ireland publishes list of public authorities: Ireland's Minister of State Dara Calleary has published a list of nine national public authorities responsible for protecting fundamental rights under the AI Act. These authorities will receive additional powers to perform their existing responsibilities when AI poses high risks to fundamental rights, including access to mandatory documentation from AI system developers and deployers. The named authorities include An Coimisiún Toghcháin, Coimisiún na Meán, Data Protection Commission, Environmental Protection Agency, Financial Services & Pensions Ombudsman, Irish Human Rights & Equality Commission, Ombudsman, Ombudsman for Children and Ombudsman for the Defence Forces. The list fulfils Ireland's first obligation under the AI Act.
Italy's approach to implementation: Luca Bertuzzi, Senior AI Correspondent at MLex, reported that the Italian government has adopted a decree assigning its national digital and cybersecurity agencies to enforce the AI Act. Mario Nobile, head of the Agency for Digital Italy, emphasised that preparations began before this official assignment, with the agency building internal capacity and research partnerships. Nobile highlighted that more is required than simply hiring staff: developing complex competencies demands qualified personnel, continuous monitoring of the situation and dialogue with scientists. The agency has collaborated with 14 Italian professors and several leading universities on the national AI strategy. He stressed the importance of balancing the Act's implementation with industrial strategy, given Europe's lack of "unicorns" and of an AI champion. Regarding coordination among regulators, Nobile expressed confidence in the European AI Board as a platform for resolving differences.
Overview of national implementation plans: Since the AI Act took effect on 1 August, Member States have begun preparing for its implementation. The designation of national authorities is among the first priorities. This overview will be updated as new information becomes available. Please help us keep it complete and accurate by sharing any information you have about the authorities with us.
Analysis
Protecting freedom of expression: Jordi Calvet-Bademunt, Senior Research Fellow at The Future of Free Speech, wrote an op-ed in Tech Policy Press about the implications of systemic risks in the AI Act. The Act requires providers of high-impact general-purpose AI to assess and mitigate systemic risks, similar to the Digital Services Act's requirements for online platforms. While the Act's text is now settled, the upcoming General-Purpose AI Code of Practice offers an opportunity to protect freedom of expression. The Act's definition of systemic risk raises concerns about potential impacts on free speech, particularly regarding controversial content. Providers face challenges in balancing various fundamental rights and may tend towards over-removing content to avoid penalties. The European Commission's role as enforcer presents additional concerns, given the potential for political influence and the historical use of "public security" to justify speech restrictions. The DSA's implementation has already demonstrated these challenges, as evidenced by then-commissioner Breton's controversial statements about platform shutdowns during riots in 2023.
The case for comprehensive model evaluations: The think tank Pour Demain wrote a policy brief arguing that the AI Office's forthcoming Codes of Practice for general-purpose AI (GPAI) model deployment should prioritise multi-faceted evaluation approaches beyond black-box testing. The recommendations include: 1) encouraging 'de facto' white-box access for independent evaluators through custom APIs, 2) facilitating access to contextual information for comprehensive audits, and 3) implementing multi-layered safeguards combining technical, physical and legal measures. These recommendations aim to strengthen the EU's ability to assess and mitigate GPAI model risks. The brief emphasises the importance of transparency, fairness and robust evaluation methods. It also highlights the limitations of black-box testing and advocates for various access levels, from black-box to white-box and "outside-the-box" evaluations, while suggesting mechanisms to minimise risks associated with comprehensive evaluations.
Can the AI Act enforce fines better than GDPR? TechRadar staff writer Ellen Jennings-Trace reported on a Dublin ISACA conference, where Dr Valerie Lyons discussed the AI Act's implementation. Lyons suggested that companies should not be overly anxious, noting similarities between the AI Act and GDPR's principles of transparency, security and consent. She highlighted issues with GDPR enforcement, revealing that less than 1% of fines have been collected in Ireland due to appeals processes. She also noted that fines on government agencies ultimately cost taxpayers, citing Tusla's €75,000 fine as an example. For smaller businesses deploying AI systems, Lyons made the following recommendations: 1) conduct a gap analysis using ISO or NIST standards, 2) build on existing GDPR compliance, 3) implement AI literacy training before February 2025, 4) update ROPA notices, policies and DPIAs, and 5) establish robust monitoring processes for AI systems.
Why the Code of Practice matters: Nicolas Moës, the Executive Director at The Future Society, wrote an op-ed arguing why the Code of Practice for general-purpose AI (GPAI) matters. The Code, effective from 2 August 2025, will detail rules for GPAI products and services on the EU market. The Code's significance for Europe includes promoting transparency and understanding about GPAI models, balancing legal certainty with flexibility and supporting responsible AI innovation. Globally, the Code matters because it can establish a responsible approach to GPAI development for non-EU companies, translate legislative obligations into practical measures and indicators, and serve as a blueprint for collaborative co-regulation.
Submission on the establishment of the Scientific Panel: The Irish Council for Civil Liberties (ICCL) has provided feedback on the Draft Implementing Act for establishing an independent AI scientific panel. Their recommendations focus on three key areas. Regarding conflicts of interest, while the draft acknowledges experts must be independent and free from AI provider interests, ICCL suggests explicitly stating that conflicts of interest make candidates ineligible. On public transparency, ICCL recommends setting specific deadlines for the AI Office to process requests from the Panel and publishing decisions and reasoning on a dedicated webpage. Concerning the effectiveness of the Panel, ICCL argues that trade secrets and business confidentiality should not impede the Panel's work, and suggests removing these references from relevant articles, noting that Panel members are bound by professional secrecy.
The debate presented in the article by Jordi Calvet-Bademunt from The Future of Free Speech is highly compelling and essential. The EU needs to shed light on how it is dealing with the delicate balance between regulating AI to mitigate systemic risks and safeguarding freedom of expression, a tension that remains crucial to address in today's evolving technological landscape.