The EU AI Act Newsletter #52: Free Tools to Navigate the Final Text
Navigate the AI Act final text with free new search, chat and contents tools.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The AI Pact has kicked off: The AI Pact, an initiative by the European Commission, urges organisations to prepare for the AI Act's implementation. While some provisions of the Act will take effect shortly after its adoption, others apply only after a transitional period. The AI Pact encourages early compliance with the Act's requirements. Pillar I focuses on community engagement and sharing best practices. Workshops by the AI Office and collaborative efforts aim to help participants understand their responsibilities and prepare for implementation. Pillar II enables companies to make voluntary commitments towards transparency and meeting high-risk requirements, with pledges detailing concrete actions and timelines. Organisations will be asked to report regularly on their commitments, which are published to enhance accountability and confidence in AI technologies. The Commission supports participants in understanding the AI Act's objectives, adapting their processes, and sharing knowledge. The Pact also allows ambitious participants to test and share their solutions.
Analyses
Many AI Act tools being developed: The AI Act Explorer developed here at the Future of Life Institute has now been updated with content from the European Parliament's 'Corrigendum' version from 19 April 2024. The content of the Act is unlikely to change any further. Many other tools are being developed by a variety of stakeholders. We want to bring attention to a useful tool called CLaiRK developed by Digital Policy Alert. This tool is similar to our AI Act Explorer, but it has some additional features, including the CLaiRK Chat, where you can ask questions about the Act. For example, you could ask it, “What does the AI Act say about general-purpose AI?” The response is informative. Another one worth checking out is the final text of the AI Act with an interactive table of contents for easy navigation. This was put together by the Chair on the Legal and Regulatory Implications of Artificial Intelligence at MIAI Grenoble Alpes. Finally, there are some useful flowcharts explaining the legal compliance nuances of the AI Act, including one by Burges Salmon and another by the appliedAI Institute.
Reflections on the capital market and the AI Act: ScienceBusiness journalist David Matthews interviewed Dragoș Tudorache, the outgoing MEP responsible for the AI Act. Tudorache emphasises the need for a European capital markets union to foster innovation in AI. He argues that without easier access to capital, European AI companies will continue to rely on partnerships with, or selling out to, US tech giants. He cites examples such as Mistral AI and DeepMind, which partnered with or were acquired by US companies due to a lack of access to resources. While some suggest public funding for AI, Tudorache believes private funding is crucial for industry competitiveness. He highlights the importance of the AI Act, despite concerns about bureaucratic burden, and suggests that much of AI is not touched by this regulation.
Extraterritorial scope may be broader than realised: Tim Hickman, Partner, and Thomas Harper, Associate, at White & Case LLP, explored the impact of the AI Act on the data processing activities of organisations based outside of the EU. Hickman and Harper state that the Act extends its reach beyond EU borders, affecting organisations worldwide. Its extraterritorial application means non-EU entities can fall under its scope if their AI outputs are used in the EU, regardless of intent and even if the provider/deployer has made no attempt to aim its activities at the EU, according to Article 2(1)(c). For instance, a UK-based company supplying AI-generated content to a Moroccan client could be subject to the AI Act if the content is later used in the EU. Even if the provider takes precautions, such as contractual prohibitions, it could still be in scope if the client breaches those agreements. This expansive reach makes it challenging for non-EU organisations to assess their compliance obligations under the Act, highlighting the need for further clarification.
Will the AI Act work? Laura Caroli, Senior Policy Advisor at the European Parliament, wrote an op-ed for the IAPP website discussing whether the Act will be effective, considering lessons from past legislative initiatives. According to Caroli, the Act draws from product safety laws but is really a hybrid piece of legislation aiming to balance technical safety with fundamental rights protection. Critics argue that AI's complexity defies the simple technical requirement-based regulation that works for, say, toasters or washing machines. The Act introduced measures such as the fundamental rights impact assessment to address these concerns. Unlike a toy or a lift, AI is not a static product, so flexibility is crucial. Because of that, the Act leaves room to update most of the annexes, thresholds and other criteria for general-purpose AI models with systemic risk, and relies on standards and guidelines. Finally, Caroli argues that enforcement is less likely to run into problems than under the GDPR, because the Act relies on national market surveillance authorities, which intervene where the infringement takes place, whereas the GDPR assigns jurisdiction to the country where the provider is established, a rule that has created major bottlenecks, most notably in Ireland with its many non-EU tech companies.
Views on the implications of the Act for businesses: Freelance writer Andrada Fiscutean published a variety of stakeholder views about the Act's impact on businesses on the CIO website. Commissioner Thierry Breton sees it as a launchpad for EU startups to lead the global race for trustworthy AI. Co-rapporteur Tudorache emphasises that the vast majority of AI would not be touched, but that it is important for those who are affected to start preparing. Privacy lawyer Tim Wybitul expects the Act to have a significant impact, possibly greater than that of the GDPR. Danielle Jacobs, CEO of Beltug, discusses challenges and actions among Belgian CIOs and highlights that the absence of established best practices complicates preparations for the Act. Finally, EU policy experts Rob van Kranenburg and Gaelle Le Gars say that further AI laws are on the horizon, notably the AI liability directive.