Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to Euractiv's Luca Bertuzzi, MEPs negotiating the AI Act stand by tighter regulation of powerful AI models such as OpenAI's GPT-4. There was previously consensus on a tiered approach with broad obligations for all foundation models and additional requirements for those posing systemic risks. France, Germany, and Italy then broke from this consensus, opposing obligations on foundation models altogether. The European Parliament insists on obligations for developers of the most powerful models and has introduced a working paper with binding requirements. These include internal evaluation and testing, cybersecurity measures, technical documentation, and energy-efficiency standards. The obligations would apply solely to the original developers of models with systemic risk, such as OpenAI and Anthropic, not to the downstream developers that refine these models. The AI Office would oversee compliance and impose sanctions for breaches. The parliamentarians accept the idea of EU codes of practice, but only as a complement to the horizontal transparency requirements for all foundation models. The criteria for designating models with systemic risk include capabilities, number of users, financial investment, modalities, and release strategies; the MEPs reject the single quantitative threshold proposed by the Commission.
The European Commission has introduced the AI Pact, encouraging companies to voluntarily commit to implementing measures outlined in the AI Act before the legal deadlines. Some AI Act provisions will take effect shortly after adoption, while others, particularly those for high-risk AI systems, will apply only after a transitional period. The AI Pact seeks early industry commitment to anticipate and implement AI Act requirements, addressing concerns about the rapid adoption of generative and general-purpose AI systems. Companies can pledge to work toward compliance, outlining the processes and practices they are planning or already putting in place. The Commission will collect and publish these pledges, fostering transparency and credibility. The AI Pact also aims to create a community of key EU and non-EU industry players that exchange best practices and raise awareness of the future AI Act's principles. Interested organisations can now express their interest in participating, with the formal launch expected after the Act's adoption.
Analyses
Bram Vranken, Researcher and Campaigner at the Corporate Europe Observatory, published an op-ed in Social Europe on how Big Tech companies are using intense lobbying efforts to derail the AI Act, pushing for advanced AI systems, known as 'foundation models', to remain unregulated. Vranken argues that tech corporations like Google and Microsoft, by investing billions in partnerships with startups, are contributing to a near-monopoly. The Parliament aimed to impose obligations on companies developing foundation models, such as mitigating risks to fundamental rights, checking the data used to train these AI systems for bias, and reducing their environmental impact. However, behind closed doors, tech firms have resisted such regulations, despite publicly calling for AI regulation. Lobbying by Big Tech has increased: this year, 66% of AI-related meetings involving Parliament members were with corporate interests. The CEOs of Google, OpenAI, and Microsoft have engaged with high-level EU policymakers, and 86% of high-level Commission officials' meetings on AI have been with industry. AI startup Mistral AI has joined the lobbying campaign, with the former French Secretary of State for Digital Transition, Cédric O, in charge of its EU relations. French, German, and Italian officials also met tech industry representatives to discuss cooperation on AI, after which they began echoing Big Tech's push for innovation-friendly regulation.
Rishi Bommasani, Society Lead at the Stanford Center for Research on Foundation Models, wrote an overview of possible approaches to categorising foundation models. Several jurisdictions, including the US and the EU, are contemplating tiered regulation of foundation models that takes into account the impact and potential harm they may cause. Bommasani emphasises that tiers should be determined by demonstrated impact, with scrutiny increasing for models that have a greater societal impact or pose more significant risks. Measuring that impact is challenging, however, because foundation models are generally not used by the public directly but through downstream applications. Bommasani therefore suggests two potential routes forward: tracking the applications that depend on a given foundation model, and counting the aggregate number of users across those downstream applications. He also raises the possibility of hybrid approaches, which the Parliament has recently considered, that integrate different tiering strategies for more robust regulation.
Computer & Communications Industry Association published an explainer of foundation models. The text defines 'AI foundation models' as models trained on broad data with self-supervision capabilities, enabling adaptation to various downstream tasks. These models, a subset of general-purpose AI, power numerous applications like text generation, accessibility, innovation, education, data analysis, research, and automation. Prominent examples include OpenAI's GPT-3.5 and GPT-4, Google's PaLM2, Meta's Llama2, and Amazon's Titan. The rapid deployment of such tools has prompted debates on AI policy, with a consensus on addressing risks like bias, safety, cybersecurity, and privacy. The text suggests that rules for foundation models should be technology-neutral, focus on high-risk uses, maintain exemptions for developers, have balanced and implementable rules, avoid unnecessary copyright requirements, streamline responsibilities along the value chain, and establish a fair implementation timeline under the AI Act.
Equinet and ENNHRI jointly issued a statement urging policymakers to strengthen the protection of equality and fundamental rights within the AI Act. Their recommendations include a robust enforcement and governance framework for foundation models and high-impact foundation models, incorporating mandatory independent risk assessments, fundamental rights expertise, and stronger oversight. They also emphasise legal protections in relation to high-risk systems, effective collaboration between the AI Office, national supervisory authorities and independent public enforcement mechanisms, and a redress mechanism for victims of AI-enabled discrimination. The statement advocates mandatory fundamental rights impact assessments for deployers of AI systems, a ban on biometric and surveillance practices that pose unacceptable risks to equality and human rights, and the prohibition of predictive policing for criminal and administrative offences, given its potential to embed structural biases and over-police certain groups of people.