Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV's Luca Bertuzzi, the AI Act trilogue has made progress, with the less controversial aspects cleared at the technical level. The parts ready for confirmation at the political level are the obligations of providers and users of high-risk systems, conformity assessment bodies, and technical standards. Provisions promoting innovation and the obligation to conduct a fundamental rights impact assessment are yet to be confirmed. Some systems will require third-party conformity assessment by certified auditors, who must be authorised by national authorities; the designation procedure for these bodies, and the procedures for contesting designations, have been updated. The European Commission's discretion in drafting technical standards has been limited, and consultation with the AI Office and the Advisory Forum is now mandatory. Finally, parliamentarians introduced an obligation for users of high-risk systems to conduct a fundamental rights impact assessment before use, but the Spanish presidency proposes limiting this obligation to public bodies and making stakeholder consultation merely voluntary.
Analyses
Kai Zenner, Head of Office and Digital Policy Adviser to MEP Axel Voss, wrote an op-ed on the OECD website about regulating foundation models in the AI Act. Zenner explains that the Commission's proposed Act, drafted in 2019-20 when foundation models were not yet prominent in AI, lacks explicit coverage of these models. The versatility of foundation models, and thus their potential to serve various unforeseen purposes, makes them difficult to fit into the Act's product safety approach. The Act's use case approach, which assigns AI systems to fixed risk classes, also proves too rigid for the latest foundation models capable of diverse tasks. Furthermore, if a foundation model were assigned an intended purpose, downstream companies building on it would be obliged to comply with the Act, disadvantaging European businesses. The European Parliament has taken steps to address these issues by introducing Article 28b, which adds a regulatory layer for foundation models. It includes nine obligations for developers, among them risk identification, testing and evaluation, and documentation. Finally, Zenner proposes adopting a systemic approach to reduce burdens on smaller providers, targeting only a small number of highly capable and relevant foundation models under the AI Act, similar to how Very Large Online Platforms are designated under the Digital Services Act.
Pegah Maham and Sabrina Küspert, Project Director and Fellow respectively at the Stiftung Neue Verantwortung, wrote a policy brief arguing that the EU has a significant opportunity to lead in responsible AI development through the AI Act and beyond, but that to do so, it needs to understand the risks related to general-purpose AI models. Firstly, "Risks from Unreliability" result from the lack of control over AI models' behaviour, leading to issues such as discrimination and stereotype reproduction, misinformation, and privacy violations. Secondly, the "Misuse" of these dual-use AI models poses dangers such as cybercrime, biosecurity threats, and politically motivated misuse by malicious actors. Lastly, "Systemic Risks" emerge from the centralisation and rapid integration of AI, leading to concentration of economic power, inequality, ideological homogenisation, and disruptions as societal adaptation lags behind.
151 civil society organisations called on EU institutions to ensure that, during the trilogue, the AI Act puts people and fundamental rights first. Firstly, they urge the institutions to implement a framework for accountability, transparency, accessibility and redress that empowers people affected by AI systems. They state that the proposed Act must include provisions requiring fundamental rights impact assessments to be conducted and published before high-risk AI systems are deployed. Secondly, they recommend imposing limits on harmful and discriminatory surveillance by national security, law enforcement and migration authorities, including bans on real-time and retrospective ("post") remote biometric identification in public spaces and on predictive and profiling systems in law enforcement and criminal justice. Finally, they emphasise that the Act should not yield to lobbying by large technology companies: it should remove any loopholes that undermine the regulation – such as the additional layer added to the risk classification process in Article 6 – and ensure that providers of general-purpose AI systems are subject to a clear set of obligations.
The American Chamber of Commerce to the European Union published a position paper for the AI Act trilogue negotiations. In it, they encourage EU decision-makers to align the definition of AI with the OECD's definition and support deleting Annex I, which lists AI techniques; their reasoning is that a targeted and internationally accepted definition of AI would facilitate multilateral coordination in AI policy. They also recommend narrowing the scope of the high-risk designation to avoid the vagueness of the Commission's original text. Thirdly, in their view, the Chapter III requirements largely align with current practices of responsible AI providers, but should remain flexible and outcome-oriented in light of the evolving nature of AI technologies. Additional recommendations concern obligations for foundation models and general-purpose AI, transparency for artificially generated content, harmonised enforcement, and clarity and flexibility in standards.
A coalition of European creators and rights holders in the creative and cultural sectors has called for meaningful transparency obligations on AI systems in the AI Act to ensure the lawful use of copyright-protected content. They argue that progress in AI innovation and effective copyright protection are not mutually exclusive, since AI systems that rely on protected materials as input derive their purpose and value from those materials. The coalition recommends that AI systems comply with the EU copyright framework, and that developers and deployers keep detailed records of third-party works used, including the basis of access, to enable rights holders to enforce their rights. Finally, they see the Parliament's proposal to oblige AI providers to record the data used to train AI systems, including copyrighted material, as a step in the right direction, and urge support for these provisions in the Act.