Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV's Luca Bertuzzi, on 14 March the European Parliament co-rapporteurs Dragoș Tudorache and Brando Benifei shared a draft on general-purpose AI (GPAI), outlining obligations for providers and responsibilities for different economic actors. GPAI is defined as an "AI system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks". The draft clarifies that AI systems developed for limited applications and simple multi-purpose AI systems are not considered GPAI. The co-rapporteurs propose that GPAI providers adhere to some of the requirements initially intended for AI solutions with high-risk potential, including ensuring that the design, testing and analysis of GPAI align with risk management requirements to safeguard people's safety, fundamental rights, and EU values. The draft also introduces a new article suggesting that European authorities and the AI Office collaborate with international partners to develop cost-effective guidance and capabilities for measuring and benchmarking the compliance of AI systems.
Analyses
Matija Franklin, Hal Ashton, Rebecca Gorman and Stuart Armstrong wrote an article for the oecd.ai website arguing that the AI Act needs to address critical manipulation methods. They note that the Act aims to ban AI systems that manipulate people through subliminal techniques or target vulnerable groups in harmful ways. However, they emphasise, the Act fails to acknowledge that AI systems can alter people's preferences, change behaviours in areas beyond the platforms people interact with, and target and exploit psychometric vulnerabilities. To address these gaps, they propose the following recommendations: 1) broaden the Act to include non-subliminal techniques that materially distort a person's behaviour; 2) regulate any experimentation that alters behaviour without informed consent; 3) audit AI systems to identify mechanisms that change behaviour and preferences; 4) acknowledge that psychometric differences can constitute vulnerabilities when they are measured and exploited; and 5) add "harm to one's time" and "harm to one's autonomy" to the Act's list of harms.
Zosia Wanat at Sifted wrote that the AI Act aims to establish world-leading regulation, but startups worry it will be a hindrance rather than a help. One AI startup founder states that while regulation is needed to address the significant impact of AI, the law will impose a lot of unnecessary bureaucracy on companies. A survey found that most VCs and startups expect the law to reduce European competitiveness and slow AI development. Some startups may relocate to the US to avoid compliance costs, while sectors accustomed to regulation may fare better. MEP Eva Maydell has proposed regulatory sandboxes in the Act, which could help startups deploy AI products faster while retaining legal certainty. Not everyone is worried about the risks: a startup founder from Poland says that their system will likely be classified as low-risk and will only require disclosing to users that they are interacting with a chatbot. Proponents argue that the law will build trust, bring more clarity, and harmonise practices across Europe. A managing partner at a VC firm goes so far as to say that if companies do not want to comply, maybe they should move somewhere else.
Ursula Pachl published an op-ed in EUobserver calling on the European Parliament to ensure that the AI Act protects people from harmful uses of AI systems that could significantly affect citizens, consumers, and society as a whole. Pachl warns that AI systems may soon make consequential decisions affecting consumers, such as determining prices or access to services, in opaque ways that could enable discrimination. She makes the following recommendations to the Parliament: 1) ban AI systems that carry an unacceptable risk of harm to consumers; 2) broaden the scope of high-risk AI systems under the law; 3) ensure all other types of AI systems respect broad principles such as fairness, transparency and accountability; and 4) only roll out a technology of this complexity and reach once strong rights are guaranteed to the people affected by it, including rights to object, to receive an explanation, and to seek redress.
On the Center for Democracy & Technology website, Ophélie Stockhem and Claire Fourcans discuss the risks of discriminatory AI hiring systems and advocate for incorporating civil rights standards and principles into the AI Act. Stockhem and Fourcans note that AI systems have already discriminated based on gender and disability; for example, Amazon's automated recruiting system taught itself that male candidates were preferable to female ones. The authors point out that the EU categorises such recruitment AI as "high-risk" under the proposed AI Act; however, the Act neither provides remedies against such discrimination nor aligns with EU Equality Law, which mandates an effective right to remedy in cases of hiring discrimination. Stockhem and Fourcans advocate extending complaint and appeal mechanisms to cover providers and users of AI systems, and aligning them with existing EU equality laws. They add that close collaboration between national supervisory authorities and courts is necessary.