The EU AI Act Newsletter #40: Special Edition on Foundation Models & General Purpose AI
08/11/23-15/11/23
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to Euractiv's Luca Bertuzzi, the whole AI Act may be in jeopardy. On Friday, 10 November, negotiations broke down as larger member countries sought to retract the proposed approach for foundation models. The dispute revolves around how to regulate AI models like OpenAI's GPT-4, which powers the popular ChatGPT. A consensus had emerged in the previous trilogue to implement tiered rules for these models, with stricter obligations for the most impactful ones, currently developed by non-European companies. However, opposition from major European countries, notably France, Germany, and Italy, has since intensified. Both the French AI startup Mistral, whose lobbying effort is fronted by former digital state secretary Cédric O, and Germany's Aleph Alpha, closely connected to the German establishment, fear and oppose such regulation. Unless resolved soon, this deadlock puts the entire AI legislation at risk. Here is an informative X thread on this:
As a reminder for everyone who has not followed the AI Act's development from the start, here are the key stages with regard to the regulation of general purpose AI systems (GPAIS) and foundation models:
- April 2021: The European Commission's original draft of the Act did not mention GPAIS or foundation models.
- August 2021: The Future of Life Institute (where I work) and a handful of other stakeholders provided feedback to the Commission that the draft did not address increasingly general AI systems such as GPT-3 (the state of the art at the time).
- November 2021: The Council, led by Slovenia, introduced an Article 52a dedicated to GPAIS, stating that GPAIS shall not, by themselves alone, be subject to the regulation.
- March 2022: The JURI committee in the European Parliament essentially copied these provisions into its position.
- May 2022: The Council, then led by France, substantially modified the GPAIS provisions, requiring such systems, which may be used as high-risk AI systems or as components of high-risk systems, to comply with select requirements.
- November 2022: The Czechia-led Council adopted its position, stating that GPAIS which may be used as high-risk AI systems, or as components of high-risk AI systems, shall comply with all of the requirements established in the chapter on requirements for high-risk AI systems.
- June 2023: The Parliament adopted its position and introduced Article 28b, with obligations for providers of foundation models regardless of how they are distributed.
In more Euractiv coverage, from 9 November, the AI Act was making progress with proposed criteria for identifying powerful foundation models. The Spanish presidency circulated a draft on 7 November setting out obligations for foundation models, including the most powerful or 'high-impact' ones. Leading MEPs suggested initial criteria for determining the most impactful models, including data sample size, model parameter count, computing resources, and performance benchmarks. They advised that the Commission should develop a methodology to assess these thresholds and adjust them as technological developments warrant. Suggested obligations for high-impact models included registration in the EU public database and assessment of systemic risks. MEPs advocated for the AI Office to publish yearly reports on recurring risks, best practices for risk mitigation, and a breakdown of systemic risks.
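For readers who want a concrete sense of what a tiered, threshold-based classification could look like, here is a minimal Python sketch. Everything in it is a hypothetical illustration: the criteria mirror those floated by MEPs, but the numeric thresholds and the `is_high_impact` function are invented for this example, not taken from the draft. The real methodology would be defined, and periodically adjusted, by the Commission.

```python
from dataclasses import dataclass

# Hypothetical thresholds, for illustration only: the draft text does not
# fix numeric values; the Commission would define and update the real ones.
COMPUTE_THRESHOLD_FLOPS = 1e25     # total training compute
PARAMETER_THRESHOLD = 100e9        # model parameter count
DATASET_THRESHOLD_TOKENS = 5e12    # training data sample size

@dataclass
class FoundationModel:
    name: str
    training_flops: float
    parameters: float
    dataset_tokens: float

def is_high_impact(m: FoundationModel) -> bool:
    """Classify a model as 'high-impact' if it crosses any illustrative threshold."""
    return (
        m.training_flops >= COMPUTE_THRESHOLD_FLOPS
        or m.parameters >= PARAMETER_THRESHOLD
        or m.dataset_tokens >= DATASET_THRESHOLD_TOKENS
    )

# A hypothetical frontier-scale model crossing the compute threshold
frontier = FoundationModel("example-frontier", 3e25, 1.5e12, 6e12)
print(is_high_impact(frontier))  # True -> registration and systemic-risk duties
```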
On 7 November, Euractiv reported that foundation model governance in the EU's AI law is starting to take shape. The Spanish presidency put forth a governance architecture for overseeing obligations on foundation models and high-impact foundation models. The Commission, via implementing acts, would define procedures for monitoring foundation model providers, outline the AI Office's role, appoint a scientific panel, and provide for audits. Audits may be performed by the Commission, independent auditors, or vetted red-teamers with API access to the model. The proposed governance framework includes the AI Office and a scientific panel for regular consultations with the scientific community, civil society, and developers. The panel's tasks encompass contributing to evaluation methodologies, advising on high-impact models, and monitoring safety risks. In cases of non-compliant AI systems posing significant EU-level risks, the Commission can conduct emergency evaluations and impose corrective measures.
Analyses
Two doctoral students from Germany, Anton Leicht and Dominik Hermle, argued in a blog post that recent criticism of the EU AI Act's foundation model regulation on economic grounds is misleading: a strong regulatory focus on foundation models would in fact be highly beneficial to the EU economically. Leicht and Hermle note that the foundation model regulation is criticised, notably by France and Germany, for potentially impeding European foundation model development. Concerns centre on economic competitiveness against global AI leaders like OpenAI, Google, and Meta. While EU providers Aleph Alpha and Mistral AI have lately secured investments in the hundreds of millions, their models trail the likes of GPT-3.5 and GPT-4 in both performance and applications. Aleph Alpha's and Mistral's best models perform at roughly the level of GPT-3 and the weakest version of Meta's Llama 2. Given their comparative lack of computational resources, funding, data and talent, these providers are judged to be several years of development behind the global leaders, with minimal chance of catching up. However, forgoing comprehensive foundation model regulation risks burdening the potentially vast market of downstream deployers with increased compliance costs, putting them in economic peril.
Natasha Lomas at TechCrunch wrote that the AI Act negotiations are at a critical stage. Talks, described as "complicated" and "difficult" by MEP Brando Benifei, are particularly contentious regarding the regulation of generative AI and foundation models. Heavy industry lobbying, especially by French startup Mistral AI and German firm Aleph Alpha, has resulted in French and German opposition to MEPs' proposals for foundation model regulation. LobbyControl, an EU and German lobby transparency nonprofit, accuses Big Tech of lobbying for a laissez-faire approach, undermining the AI Act's safeguards. Mistral CEO Arthur Mensch denies trying to block the regulation but emphasises that rules should focus on applications, not infrastructure. The outcome remains uncertain, with the risk of an impasse if member states resist accountability for upstream AI model makers.
Can't wait until AI takes half of all jobs so I can collect my UBI check and focus on my art!
Fascinating reading; thank you for writing these newsletters!
My impression is that there are two distinct definitions of the term "AI". On one hand, "AI" refers to earlier-generation algorithmic and machine learning systems branded "AI" largely for marketing purposes. Such systems are relatively mature, can significantly impact our lives, and clearly require thoughtful regulation. Even earlier drafts of the AI Act provided for that!
On the other hand, the term "AI" now also refers to a quite different set of powerful and rapidly-evolving technologies, including foundation models, their derivatives, and probably incarnations that we cannot yet imagine. Unlike the technology previously branded "AI", this new crop looks like genuine machine intelligence or its immediate predecessor. While also perhaps worthy of regulation, this technology is likely to continue undergoing radical evolution. As a result, attempting to write an enduring law to cover it seems doomed to fail; specifically, what "it" is could change abruptly, making a regulation obsolete or nonsensical. As an analogy, imagine trying to write the definitive computer regulation in the late 1980s only to see the world wide web appear!
So regulating older technologies called "AI" is actually quite important to society, and drafts of the Act do that. Regulating newer technologies is ALSO entirely worthy of consideration, but more challenging. The critical point is that the need for the latter does not diminish the need for the former. Yes, GPAIS (as you call them) are indeed flourishing, but they are not replacing many of the earlier-generation decision systems branded "AI", which continue to affect our lives.
Given this, I think the EU would do well to promptly pass regulation covering systems other than GPAIS. They were on a great track and should wrap it up. Separately and in parallel, the EU should take up regulation of GPAIS with no illusion that that will be a one-time effort.
In short, I'm afraid that the aesthetic desire to produce a single, unified "AI act for the ages" could leave us with unnecessary delays or no act at all. I find this very frustrating!