Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Foo Yun Chee and Supantha Mukherjee from Reuters reported that Brando Benifei, one of the lawmakers responsible for AI Act negotiations, called on EU member states to compromise on crucial issues in order to secure an agreement by the end of the year. With two more rounds of discussions scheduled for next month, Benifei stressed the need for greater flexibility among EU countries. Some of the most contentious issues revolve around biometric surveillance and the use of copyrighted material by AI models such as ChatGPT. Lawmakers aim to prohibit the use of AI in biometric surveillance, but several EU countries, led by France, seek exceptions for national security and military purposes. Legislators also want AI regulations to cover copyrighted content used by companies like OpenAI, whereas EU member states argue that existing copyright rules within the bloc provide sufficient protection. An advisor at the European Commission stated that the biometric surveillance question could go "down to the wire." Some, such as MEP Svenja Hahn, advocate banning biometric facial surveillance within the AI Act while addressing copyright concerns through copyright law, aligning with EU countries on the latter point.
Analyses
Hadrien Pouget, Associate Fellow at the Carnegie Endowment for International Peace, and Johann Laux, British Academy Postdoctoral Fellow at the University of Oxford, wrote an article discussing the pivotal role of the AI Office within the EU's regulatory ecosystem, focusing on its responsibilities in the implementation of the AI Act. The authors recommend that the AI Office advise and coordinate decision-makers in three key areas: developing harmonised standards, amending the AI Act, and handling legal matters in courts. On standards, they recommend that the AI Office contribute to increasing specificity over time, help identify the normative decisions that require consideration, and provide insights when normative judgments need to be made. On amendments, the Office should monitor the complex relationships between the AI Act and other laws and maintain independence when making recommendations for delegated acts. On court proceedings, the Office can help identify gaps in the protection of fundamental rights and, by monitoring technological evidence, provide insights into which harms are predictable and which are not.
Eva Simon, Advocacy Lead for Tech & Rights, and Jonathan Day, Communications Manager at the Civil Liberties Union For Europe, wrote an op-ed in Euronews arguing that, in order to protect our rights, the AI Act must include rule of law safeguards. Simon and Day explain that the Act's significance lies not only in its human rights safeguards but also in establishing a vital connection with the rule of law. The rule of law, a cornerstone of the EU, encompasses values such as transparent lawmaking, the separation of powers, impartial courts, and non-discrimination. The authors argue that mandatory fundamental rights impact assessments are vital for ensuring justice, accountability, and fairness in AI deployment, and propose that rule of law standards be integrated into these assessments – including risk evaluation, mitigation strategies, and regular reviews. As an example of such a risk, they point to the upcoming elections in Poland and to the European Parliament, where AI's potential to target individuals with personalised messages and disinformation poses a significant threat to fair elections.
Catelijne Muller and Maria Rebrean from ALLAI described in a new article that the European Parliament's negotiating position on the AI Act recommends exempting open-source AI components from the Act's scope, so long as they are not part of prohibited or high/medium-risk AI systems, with the exception of foundation models. The authors caution against a blanket exemption for open-source AI for several reasons; among them, that no clear definition of ‘open-source’ exists, which makes the issue difficult to regulate, and that unregulated open-source AI components could lead to biased or unsafe AI systems. They suggest that if lawmakers do consider an exemption, they should clearly define open-source AI, carefully assess each of the Act’s obligations on open-source AI components to determine which can be met in an acceptable and feasible manner, and add a mechanism to intervene if the exemption leads to acute or severe risks. Additionally, Muller and Rebrean recommend that open-source AI components adhere to general AI principles and comply with the prohibited practices provision.
The Joint Research Centre at the European Commission published a policy report on the cybersecurity requirement for high-risk AI systems in Article 15 of the AI Act. The report acknowledges limitations in securing AI models but suggests that AI systems can achieve compliance by addressing cybersecurity risks through measures beyond the AI model level. However, for high-risk AI systems that use emerging AI technologies, achieving compliance may require introducing new cybersecurity controls and mitigation measures. To ensure compliance, the cybersecurity of everything from the whole system down to its individual components should be mapped out, as described in the risk management framework of Article 9. This involves assessing AI models in the context of their interactions with non-AI components. The report emphasises that not all AI technologies will be ready for deployment in high-risk scenarios if their cybersecurity limitations cannot be adequately addressed.
The European Network of National Human Rights Institutions (ENNHRI) published a position on the AI Act outlining key recommendations to enhance fundamental rights protection in the era of AI. They advocate objective classification criteria for high-risk AI systems and minimal exceptions to the bans. They stress that obligations should not be limited to AI providers and should extend beyond EU borders to enable international cooperation. They also call for a robust framework covering foundation models and general-purpose and dual-purpose AI, as well as regulation of research activities. Additionally, ENNHRI emphasises effective oversight in collaboration with national human rights institutions, along with improved transparency, accountability, and redress mechanisms for AI systems, while discouraging harmful and discriminatory surveillance by national security, law enforcement, and migration authorities.