Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The European Parliament has adopted its negotiating position on the AI Act, with 499 votes in favour, 28 against, and 93 abstentions. The legislation aims to ensure that AI systems developed and used in Europe abide by EU rights and values, including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being. The Parliament has emphasised several aspects of its position: 1) a full ban on AI for biometric surveillance, emotion recognition, and predictive policing; 2) a requirement that generative AI systems like ChatGPT disclose that content was AI-generated; and 3) the classification of AI systems used to influence voters in elections as high-risk. Co-rapporteurs Brando Benifei and Dragos Tudorache stressed the importance of protecting democracies and freedoms while harnessing AI's potential for creativity and productivity. Negotiations with the Council on the final form of the law have now begun.
Analyses
Stanford University researchers Rishi Bommasani, Kevin Klyman, Daniel Zhang, and Percy Liang have evaluated foundation model providers like OpenAI and Google for their compliance with the European Parliament's version of the AI Act. According to this analysis, major foundation model providers largely do not yet comply with the requirements, but it is feasible for them to start doing so. The researchers write that foundation model providers rarely disclose adequate information about the data, compute, and deployment of their models, or about the models' key characteristics. The article recommends that EU policymakers consider additional critical factors to ensure foundation model providers are adequately transparent and accountable. Policymakers are encouraged to apply these requirements only to the most influential foundation model providers, so as to avoid overburdening smaller companies, and to make the requisite technical resources and talent available to the agencies tasked with enforcing the AI Act.
Billy Perrigo at TIME wrote that OpenAI has lobbied the EU to weaken the AI Act, even as it publicly calls for stronger AI guardrails. Documents obtained by TIME indicate that behind the scenes, OpenAI has pushed for the Act to be watered down in ways that would reduce the regulatory burden on the company. For example, OpenAI argued that the Act should not consider its general purpose AI systems to be "high risk", a designation that would subject them to stringent legal requirements. OpenAI's lobbying efforts appear to have been successful: the final draft of the Act did not contain wording suggesting that general purpose AI systems should be considered inherently high risk. As another example, OpenAI opposed a proposed amendment that would have classified ChatGPT and DALL-E as "high risk" if they generated content that falsely appeared human-generated.
MedTech Europe signed a joint healthcare statement urging member states and EU decision-makers to consider how the proposed AI Act will impact the EU health ecosystem. The signatories make four key recommendations pertinent to healthcare: firstly, that the AI Act align with all relevant horizontal and sectoral European laws and concepts; secondly, that it provide more clarity on definitions; thirdly, that it establish a clear data and data governance framework; and finally, that it ensure uniform application and implementation of its provisions across member states. The stakeholders welcome the proposal to establish an AI Board and AI Office to support member states in implementing and enforcing the AI Act.
Laura Kayali of POLITICO reported that France's Digital Minister Jean-Noël Barrot has criticised the European Parliament's position on the proposed AI Act, stating that it is too stringent and risks preventing European companies from developing their own foundation models. He gave the example of Google's decision not to launch its chatbot Bard in the EU. Having previously been on the heavy-handed side of regulating Big Tech, France now hopes to foster homegrown companies to compete with the likes of Google and OpenAI. French politicians, including President Emmanuel Macron, have called for a middle ground between regulation and innovation, and the country's former Digital Minister Cédric O has argued that the European Parliament's position on the AI Act would in effect prohibit the emergence of European large language models.
BEUC, the European Consumer Organisation, has raised concerns over the initiatives on generative AI announced as part of the joint EU-US AI voluntary code of conduct and the AI Pact for Europe. BEUC finds it highly problematic that the European Commission is negotiating such a voluntary initiative with select businesses at the very moment the European Parliament and the Council of Ministers enter the trilogue phase to agree on the proposed AI Act. BEUC recommends that negotiations on an EU-US AI code not be launched before the AI Act is finalised, as there is a risk of conflict; in any case, it is unclear what requirements can be agreed upon before the Act passes. In addition, BEUC suggests that instead of relying on voluntary industry commitments, existing EU laws – such as consumer protection, data protection, and product safety legislation – should simply be better enforced. BEUC also calls for the participation of civil society in any such undertaking.
The Ada Lovelace Institute hosted an expert roundtable on EU AI standards development and civil society participation. Participants raised concerns about the lack of financial resources to ensure fundamental rights experts could engage meaningfully in the process. Some experts suggested implementing additional financial incentives and mechanisms via the AI Act as a potential solution. Proposed solutions for strengthening the voices of civil society and SMEs included updating voting rights and their weighting, and providing training for civil society and standards development bodies. A centralised mechanism to coordinate and support input on AI standards at key moments or on key questions was also suggested. Participants discussed the challenges posed by large-scale foundation models with general applicability, and proposed new approaches, such as sandboxes for testing conformity before deployment and regular auditing, to keep governance of this rapidly evolving technology dynamic.