Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Martin Coulter and Supantha Mukherjee of Reuters reported that the AI Act is facing a delay after lawmakers failed to reach consensus on key issues at a recent meeting. The main sticking points are the definition of AI and which AI systems should be categorised as high risk, with lawmakers struggling to balance fundamental rights against innovation. The European Parliament is expected to vote on the AI Act at the end of March, but the disagreement may push back that timeline.
According to EURACTIV's Luca Bertuzzi, the latest political meetings on the AI Act in the European Parliament have focused on the AI definition, scope, prohibited practices, and high-risk categories. The AI definition now being considered is based on the US National Institute of Standards and Technology's definition. This definition clarifies that the "objectives" of the AI model relate to the parameter optimisation process, not the final purpose of the system. Furthermore, the definition has been moved from the annex into the body of the law, making it unamendable by the European Commission. A new article outlines general principles for all AI systems that do not fall under the high-risk category. These principles include human oversight, technical robustness, compliance with data protection rules, appropriate explainability, non-discrimination and fairness, as well as social and environmental well-being. Compliance with these principles would be voluntary; if they are adopted, the Commission and the AI Office would issue recommendations on how to comply with them.
Analyses
Natasha Lomas of TechCrunch summarised a report showing that a number of tech giants have been united in lobbying European Union lawmakers not to apply the forthcoming AI rulebook to general purpose AI (GPAI). The report, by the European lobbying transparency group Corporate Europe Observatory, states that Google and Microsoft, among others, have been arguing that the AI Act should not apply to the providers of large language models or other general purpose AI. Rather, they advocate that rules be applied only downstream, to those deploying these sorts of models in ‘risky’ ways. Lomas warns that if GPAI model creators end up facing no hard requirements under the Act, this approach risks a constant battle at the decentralised edge where AI is applied, with responsibility for safety and trust left to the users of GPAI. These users will not have the same scale of resources as the model makers to clean up AI-fuelled toxicity.
Andrea Renda and Alex Engler of the Centre for European Policy Studies wrote an explainer on how the AI Act should define artificial intelligence. Renda and Engler begin by explaining that proposing an AI regulation without defining AI would be legally infeasible, but equally, getting the definition wrong would undermine the regulation's aims of protecting fundamental rights and becoming a global standard. The original proposal had a broad definition of AI, and the European Parliament is currently leaning towards another broad definition similar to the one used by the US NIST. Meanwhile, the Council, led by the French and Czech presidencies, suggested a much more limited definition. Renda and Engler argue in favour of a broader definition of AI, with a high degree of autonomy given to a dedicated AI Office to tailor the Act's application to the specificities of algorithms in individual sectors and use cases.
Matt O'Shaughnessy and Matt Sheehan published an article on the Carnegie Endowment for International Peace website reviewing approaches to AI governance in the European Union and China. According to them, neither the EU nor China is following a purely horizontal or vertical approach to regulating AI: the EU's AI Act leans more towards a horizontal approach, whereas China's algorithm regulations tend to be more vertical. O'Shaughnessy and Sheehan believe the main takeaway from studying these approaches is that neither is sufficient on its own. A purely horizontal approach cannot establish meaningfully specific requirements for all AI applications, while a collection of vertical regulations for each new AI application can create compliance difficulties for regulators and companies.
Johannes Walter published an op-ed on EURACTIV arguing that the AI Act's reliance on humans to oversee AI-generated decisions should apply only where such oversight is effective. Walter says there are cases where human oversight can work well, such as evaluating whether large language models like ChatGPT produce answers that make sense. He notes, however, that there is considerable evidence that humans often make poor supervisors of AI. In one experiment, participants had to solve a simple task and received advice from a supporting algorithm; although the algorithm was set up to make poor recommendations, participants kept relying on it even after multiple rounds of the game. Walter concludes with three recommendations for the AI Act: 1) recognise that human oversight can fail, 2) test the feasibility and efficacy of human oversight, and 3) where oversight is found to fail, relinquish use of the AI system in its current form.
Patrick Grady of the Center for Data Innovation wrote a blog post arguing that, by considering placing generative AI tools in a “high risk” category in the AI Act, the EU is panicking when it should be carefully weighing the benefits and risks of new technologies. Grady argues that a new proposal would put AI systems that generate complex text in a new high-risk category despite their low risk. He adds that plausible concerns, such as the spread of misinformation or toxic content, should be dealt with in sectoral legislation. He stresses that, for example, it is acceptable for a generative AI system to produce fictitious content for a portion of a novel, but it becomes a concern if the system generates fictitious content for a scientific publication.