Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Analyses
BEUC, European Digital Rights (EDRi), Access Now, and 115 other civil society organisations urged EU lawmakers to address a loophole in the AI Act that could undermine the entire legislation. The signatories argue that this loophole permits AI system developers to decide for themselves if their system is 'high-risk', essentially allowing them to determine if the law applies to them. Originally, the European Commission's draft identified high-risk AI systems based on specific purposes listed in Annex III. However, changes introduced by the Council and European Parliament allow developers to assess their systems' risk level subjectively. The civil society organisations call for the rejection of these changes to Article 6 and the restoration of the Commission's risk-classification process, emphasising the need for an objective, coherent and legally certain process to identify 'high-risk' AI systems in the AI Act.
In July, AlgorithmWatch and the AI, Media & Democracy Lab of the University of Amsterdam co-organised a workshop on general purpose AI (GPAI) and generative AI (GenAI) involving academics and civil society representatives. The key recommendations for EU policymakers emerging from this workshop were the following: 1) clarify definitions: clear, technology-neutral definitions are essential to ensure legal certainty and future-proof regulations; 2) address complexity, scale and power asymmetries: GPAI systems involve complex value chains, power imbalances and extensive societal impact, requiring special attention to avoid rights violations and societal risks; 3) avoid accountability gaps: assigning responsibilities across GPAI value chains is challenging, and affected individuals must be able to enforce their rights; 4) ensure democratic oversight: impact assessments are important for holding actors accountable, ensuring transparency and protecting rights within a democratic framework; and 5) complement other regulations: the AI Act should be coordinated with other relevant legal frameworks, including non-discrimination law, digital market regulations, data protection and sustainability directives.
The Brussels Privacy Hub, along with over 110 academics, published a letter advocating for the inclusion of a fundamental rights impact assessment requirement in the AI Act. They call for the European Parliament's version of the Act to be maintained and, in particular, for the following aspects to be ensured: 1) clear parameters for assessing the impact of AI on fundamental rights; 2) transparency about the results of the impact assessment through meaningful public summaries; 3) participation of affected end-users, especially those in a position of vulnerability; and 4) involvement of independent public authorities in the impact assessment process and/or auditing mechanisms.
Matija Franklin, PhD student at the Causal Cognition Lab at UCL, Philip Tomei, Policy Lead at Pax Machina, and Rebecca Gorman, entrepreneur, wrote an article for the oecd.ai website arguing that the EU's latest amendments to the AI Act on manipulation lack both clarity and sufficient scientific support. According to the authors, the central issue is the ambiguity of core concepts such as 'personality traits', which is mentioned multiple times without a clear definition. To enhance the Act's effectiveness, the authors suggest adopting a technical definition of personality traits based on best practices in psychology and AI. They also propose a more comprehensive definition of 'subliminal techniques' to cover hidden attempts to influence individuals' decision-making or beliefs. To define manipulative AI, they suggest considering whether the system acts covertly to intentionally change human behaviour. The Act should also address user preferences, as large AI systems often target them, thereby affecting behaviour. Finally, they suggest definitions for the concepts of 'deception' and 'informed decision'.
DOT Europe published a discussion paper focused on the regulation of generative AI in the European Parliament and Council discussions. It states that while MEPs have included specific requirements for generative AI in their position, the Council has opted for an approach centred on general purpose AI (GPAI), akin to foundation models. The paper includes the following recommendations: 1) clarify Article 28 to ensure its provisions are workable in practice; 2) reflect in the rules the language adopted by the EP on the state of the art; 3) exclude obligations for foundation models concerning the rule of law, democracy and energy; 4) support the Council's aim of focusing on the highest-risk uses when regulating GPAI/foundation models; and 5) consider the Council's wording on information sharing in Article 4(b)(5). Many more recommendations can be found in the paper.
Susanna Lindroos-Hovinheimo, Professor at the Faculty of Law at the University of Helsinki, wrote a summary of the child-related provisions in the AI Act for The European Law Blog. The author states that while the Act has evolved to take fundamental rights into account, it lacks specific provisions for the protection of children. The Commission's original text contained some child protection elements in Articles 5 and 9. Article 5 forbids real-time remote biometric identification in public spaces for law enforcement, except in specific cases such as searching for missing children. Article 9 concerns risk management systems and requires the impact on children to be considered when implementing high-risk AI systems. The Council's general approach did not go any further. The Parliament introduced fundamental rights impact assessments and indirectly included children as a vulnerable group.
P.S. We have developed a new tool at FLI to help European SMEs and startups better understand whether they might have legal obligations under the EU AI Act, or whether they might instead implement the Act voluntarily to make their business stand out as more trustworthy. Please note that the Act is still under negotiation and our tool is a simplification. Three positions with different proposals for the Act are currently on the table, and we were selective in order to make the tool more user-friendly. The tool can give an indication of the obligations your system might face. It is still a work in progress; please send your feedback to risto@futureoflife.org.