The EU AI Act Newsletter #49: Brussels Goes Global
What can large countries like India and the US learn from the EU's approach to regulating AI?
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Analyses
India paying attention to the AI Act: Shweta Bharti, Managing Partner of Hammurabi & Solomon Partners, and Pranshu Singh, an Associate at the firm, wrote an op-ed comparing India's approach with the EU's Act. Bharti and Singh argue that the EU's law has global implications, including for India, which is already working on its own responsible AI framework. The 'Brussels Effect' has long referred to the way the EU's regulations can influence standards in nations beyond Europe, shaping trade and technology cooperation, and India is no exception. Indian companies serving EU clients may face compliance burdens and additional obligations arising from the risk assessment of their AI models. Though the exact impact remains uncertain, the Act raises the following possibilities: 1) given the size of the EU market, India may consider aligning some of its AI policies to ensure trade is not hindered; 2) such alignment may create opportunities for collaboration on responsible AI development and deployment; 3) it may facilitate data exchange and joint work on AI; and 4) lessons from the EU's implementation offer India avenues for designing effective AI regulation.
What about general-purpose AI? Thanos Rammos and Richard Gläser, Partner and Associate respectively at the law firm Taylor Wessing, summarised the general-purpose AI (GPAI) provisions in the AI Act. These models, capable of performing many distinct tasks, will fall into three categories: standard models, openly licensed models, and models posing systemic risks. Providers of GPAI, and likely also those who modify existing models, must meet detailed documentation requirements, enable downstream users to understand a model's capabilities and limitations, and draw up and make publicly available a summary of the content used for training. Additionally, providers of models posing systemic risks must implement cybersecurity measures, conduct model evaluations, assess and mitigate risks, and document and report incidents. Enforcement will be overseen by the new AI Office, with fines for non-compliance of up to 3% of total worldwide turnover. Other laws also bear on GPAI, including the GDPR for privacy, copyright law and its data mining exceptions, and IP law where training and output generation raise conflicts. The Act's GPAI rules take effect 12 months after it enters into force.
What the US can learn: In City Journal, Samuel Hammond, Senior Economist at the Foundation for American Innovation, cautioned the US against emulating the EU's AI Act, on the grounds that its risk-based approach to AI deployment focuses excessively on equity issues rather than on catastrophic AI risks. Hammond notes that the Act imposes stringent obligations on high-risk AI systems, including premarket approval in some cases, and exposes companies to legal risk for noncompliance. He commends the special scrutiny of general-purpose AI developers as the Act's most reasonable and well-targeted provision. In his view, however, the Act's unrealistic demands and potential fines may deter US developers from releasing their latest models in the EU at all. He argues that a smarter approach would focus only on truly catastrophic risks and oversight of advanced AI labs, delaying broader regulation until a clearer understanding of the technology emerges.
What the US can learn (continued): Maria Villegas Bravo, EPIC Law Fellow, wrote an article evaluating the AI Act's strengths and weaknesses, categorising them as "The Good, The Bad, and The Ugly". In "The Good", Villegas Bravo praises the Act's prohibition of various intrusive AI applications and its recognition of the risks certain algorithms pose to fundamental rights. "The Bad" highlights the challenges of regulating general-purpose AI (GPAI) models and open-source software, which led to compromised provisions. Finally, "The Ugly" underscores flaws in the treatment of biometric identification systems, particularly the carveouts allowed for law enforcement use. In light of this, Villegas Bravo suggests that the US should take a different approach: because it lacks a comparably strong human rights framework, it should avoid a harms-based structure and instead pass comprehensive privacy legislation as the foundation for any future AI regulation.
Best practices for deepfakes and chatbot transparency: Thomas Gils, researcher at the Knowledge Centre Data & Society, wrote about a project the Centre ran to explore the deepfake and chatbot transparency requirements under the AI Act. The Act relies on transparency to build trust and accountability, setting out requirements for high-risk AI systems, certain other AI systems, and general-purpose AI models. The researchers tested these requirements in a mock compliance exercise with stakeholders and collected their feedback. For chatbots and deepfakes, three general best practices emerged: 1) make disclaimers accessible to diverse audiences, including people with disabilities, by using multiple modes of communication such as written, visual, and oral; 2) provide an appropriate amount of information, maintaining proportionality and balance to avoid overwhelming users; and 3) adapt disclaimers to the intended and potential target audience to address accessibility and information needs effectively.
Reflections from various stakeholders: Tech reporter Pascale Davies at Euronews reported on how tech experts from different sectors view the Act's passage in the European Parliament. Max von Thun of the Open Markets Institute commends Brussels for its initiative but points to loopholes, weak regulation of the largest foundation models, and the Act's failure to address the power of dominant tech firms. Elsewhere, Alex Combessie of Giskard welcomes the Act and is confident in his company's ability to enforce effective checks and balances. Katharina Zügel of the Forum on Information and Democracy advocates stricter rules, particularly for AI systems in the information space, to safeguard fundamental rights. Julie Linn Teigland of EY emphasises the importance of private sector involvement for Europe's competitiveness but underscores the need for businesses to prepare for compliance. Marianne Tordeux Bitker of France Digitale worries that excessive regulation will hinder European AI competitiveness. Finally, Risto Uuk (me!) at the Future of Life Institute stresses the importance of adequate resources for the AI Office and of civil society involvement in the codes of practice for general-purpose AI.