The EU AI Act Newsletter #80: Commission Seeks Experts for AI Scientific Panel
The European Commission is establishing a scientific panel of independent experts to aid in implementing and enforcing the AI Act.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission seeks experts for AI Scientific Panel: The European Commission is establishing a scientific panel of independent experts to aid in implementing and enforcing the AI Act. The panel's mandate centres on general-purpose AI (GPAI) models and systems. It will advise the EU AI Office and national authorities on systemic risks, model classification, evaluation methods, and cross-border market surveillance, and will alert the AI Office to emerging risks. The Commission seeks 60 members for a renewable two-year term. Candidates must have expertise in GPAI, AI impacts, or related fields, including model evaluation, risk assessment and mitigation, cybersecurity, systemic risks, and compute measurement. A PhD or equivalent experience is required, and experts must maintain independence from AI providers. The selection will ensure gender balance and representation across EU and EEA/EFTA countries. Whilst EU citizenship is not mandatory, 80% of experts must be from EU or EFTA member states. Applications close on 14 September. The Future of Life Institute has also written a blog post outlining why experts should join the panel.
The EU AI Office's first anniversary: The EU AI Office is marking its first anniversary, having grown to over 100 experts across AI policy, research and innovation, healthcare, regulation, and international cooperation. One of its achievements has been implementing the AI Act by creating practical guidance, tools, and governance structures. Key accomplishments include issuing guidelines on AI system definition and prohibitions, establishing an AI literacy repository, and working with the AI Board of Member States. The Scientific Panel is being established, with an Advisory Forum to follow. Upcoming initiatives include the imminent publication of a Code of Practice on general-purpose AI (GPAI), developed with input from over 1,000 experts and to be assessed by August 2025. Guidelines clarifying GPAI concepts are in development, and a public consultation on high-risk AI systems is currently open. The office is actively involved in standardisation efforts and plans to launch an AI Act Service Desk to provide guidance to developers, deployers, and authorities.
Analyses
Letter calling for GPAI rules to serve the interests of European businesses and citizens: A coalition of AI researchers and representatives from civil society, industry, and academia, including Nobel laureates Daron Acemoglu and Geoffrey Hinton, has written to the European Commission President. The letter urges EU leaders to resist pressure from those attacking the rules on general-purpose AI (GPAI). The coalition argues that the EU can demonstrate its ability to provide industry with innovation-friendly tools, such as the Code of Practice, without compromising on health, safety, and fundamental rights. The Code of Practice, developed over nine months with extensive stakeholder input, facilitates the fulfilment of GPAI obligations; it primarily affects 5-15 large companies and aligns with existing risk management practices. The coalition recommends three elements for future-proof GPAI governance: 1) mandatory third-party testing for models with systemic risk in the Code of Practice, to ensure effective safeguards and foster trust; 2) robust review mechanisms that can swiftly adapt to emerging risks and safety practices, including emergency updates for imminent threats; and 3) strengthened enforcement capabilities for the AI Office, expanding the AI Safety unit to 100 staff and the implementation team to 200 while recruiting leading AI safety experts. The letter was first shared by the Financial Times.
The US is wrestling with an increasingly complex regulatory environment: Bella Zielinski and Jacob Wulff Wold of Euractiv argued that the US faces a complex regulatory landscape of its own despite criticising the EU's tech regulations. In 2024, states introduced nearly 700 AI-related bills, 113 of which became law, and hundreds more have been introduced in 2025. States such as Colorado and Texas adopted comprehensive approaches similar to the EU AI Act, while California passed targeted legislation on specific issues such as deepfakes and digital replicas. Meta has complained to the White House about an "unworkable regulatory environment" with contradictory standards exceeding EU restrictions. However, this state-level patchwork could benefit EU enforcement of its AI Act against US companies. The Trump administration has shifted focus towards AI as a geopolitical tool, revoking Biden's executive order on AI risks and planning to enhance America's global AI dominance. This hostile political environment may discourage further state-level AI legislation, and major tech companies are lobbying for federal regulation to supersede state laws. The outcome, whether a state patchwork or deregulation, could influence the EU's regulatory approach.
Kazakhstan’s new AI law inspired by the EU: According to Euractiv's Xhoi Zajmi, Kazakhstan is striving to become the first nation in Central Asia to comprehensively regulate AI, drawing inspiration from the EU's AI Act. The country's draft 'Law on Artificial Intelligence', which received initial approval from Kazakhstan's lower parliamentary chamber in May, demonstrates its commitment to human-centric AI regulation. Shoplan Saimova of the Institute of Parliamentarism said that the EU AI Act serves as a model, and that Kazakhstan aims to lead rather than follow by developing a framework aligned with national priorities that builds trust between humans and AI systems while protecting public interests. The legislation, developed through extensive stakeholder consultation, seeks to govern AI across society. However, a recent academic analysis by Kazakh scholars identifies four main shortcomings compared to the EU framework: the absence of a clear risk classification system, inadequate algorithmic transparency requirements, limited personal data protection measures, and insufficient enforcement institutions.
Generative AI outlook in the EU: The European Commission's Joint Research Centre published a report examining generative AI's (GenAI) impact within the EU, focusing on innovation, productivity, and societal change. Under the AI Act, many GenAI systems fall into the "limited risk" category, requiring providers to ensure that users know they are interacting with machines and that AI-generated content is identifiable; this covers deepfakes and AI-generated content published on matters of public interest. GenAI systems can also be part of high-risk or unacceptable-risk applications. Although GenAI is not explicitly mentioned in the high-risk use cases, it could be integrated into them. Prohibited practices include harmful AI-based manipulation and deception, for example chatbots impersonating relatives or systems designed to hide undesired behaviour during evaluation. As many current GenAI models exhibit general-purpose AI capabilities, they are subject to the corresponding GPAI obligations. The Commission's AI Office is developing a Code of Practice to detail these rules based on state-of-the-art practices.