Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
After the political agreement on the EU AI Act late last year, the European Commission published its answers to the most common questions about the Act:

1) Why do we need to regulate the use of Artificial Intelligence?
2) Which risks will the new AI rules address?
3) To whom does the AI Act apply?
4) What are the risk categories?
5) How do I know whether an AI system is high-risk?
6) What are the obligations for providers of high-risk AI systems?
7) What are examples of high-risk use cases as defined in Annex III?
8) How are general-purpose AI models being regulated?
9) Why is 10^25 FLOPs an appropriate threshold for GPAI with systemic risks?
10) Is the AI Act future-proof?
11) How does the AI Act regulate biometric identification?
12) Why are particular rules needed for remote biometric identification?
13) How do the rules protect fundamental rights?
14) What is a fundamental rights impact assessment? Who has to conduct such an assessment, and when?
15) How does this regulation address racial and gender bias in AI?
16) When will the AI Act be fully applicable?
17) How will the AI Act be enforced?
18) Why is a European Artificial Intelligence Board needed, and what will it do?
19) What are the tasks of the European AI Office?
20) What is the difference between the AI Board, AI Office, Advisory Forum and Scientific Panel of independent experts?
21) What are the penalties for infringement?
22) What can individuals affected by a rule violation do?
23) How do the voluntary codes of conduct for high-risk AI systems work?
24) How do the codes of practice for general-purpose AI models work?
25) Does the AI Act contain provisions regarding environmental protection and sustainability?
26) How can the new rules support innovation?
27) Besides the AI Act, how will the EU facilitate and support innovation in AI?
28) What is the international dimension of the EU's approach?
Analyses
Euractiv's Théophane Hartmann reported that the French government has faced criticism over its stance in the AI Act negotiations. Allegations centre on the influence of the former digital state secretary Cédric O, who is accused of having conflicts of interest. Senator Catherine Morin-Desailly claimed that Cédric O, an investor in Mistral AI, had influenced the government's position to weaken the AI regulation in favour of private, including American, corporate interests. Digital Minister Jean-Noël Barrot refuted these accusations, insisting on the government's commitment to the general interest and denying that it acted as a spokesperson for private interests. He argued that fostering AI champions in Europe is crucial for protecting citizens and the creative industry. However, Barrot's stance was criticised by Pascal Rogard of the Society of Dramatic Authors and Composers for not supporting culture, the creative industry, or copyrights. The High Authority for Transparency in Public Life had barred Cédric O from lobbying or owning tech sector shares for three years, yet he invested in Mistral AI and did not fully declare his holdings. Commissioner Breton also criticised O, questioning his commitment to the public interest.
Javier Espinoza, EU correspondent at the Financial Times, reported that Margrethe Vestager, the EU's competition and digital chief, defended the proposed AI Act against criticisms, including those from French President Emmanuel Macron. Vestager emphasised that the legislation would not hinder innovation and research but rather enhance it by providing clear rules for building foundation models, like those underlying generative AI products. She argued that the Act would offer predictability and legal certainty for both creators and users of these technologies, ensuring that regulatory measures do not stifle innovation. Macron had expressed concerns that the AI Act might cause European tech companies to fall behind their counterparts in the US and China. The law still needs to be ratified by member states in the coming weeks, and France, alongside Germany and Italy, is engaged in early discussions about seeking alterations to, or blocking, the law. Vestager highlighted that regulation is crucial for fostering trust in the market, which is necessary for investment and practical use.
David Haber, CEO of Lakera, published a commentary in Fortune about his experience as an advisor to the EU on the AI Act. Haber states that the Act initially focused on regulating narrow and predictive AI, addressing issues like AI in diagnostics and creditworthiness evaluations. However, the advent of generative AI presented a significant challenge, forcing policymakers to decide whether to stick to their original narrow focus or adapt to the rapidly evolving AI landscape. The EU ultimately chose a hybrid approach: the Act remains largely true to its original intent but includes an addendum addressing generative AI. The Act is still evolving, with crucial technical details, beyond high-level provisions on transparency requirements and penalties, yet to be settled. The next phase will involve incorporating industry-specific controls and integrating the Act with existing regulations.
The Future Society analysed how much AI Act compliance would cost for general-purpose AI (GPAI) providers. They first estimate the total investment needed to develop cutting-edge GPAI models, accounting for significant expenses on hardware, chips, and engineers. Their compliance estimate then adds internal and external risk evaluations, technical documentation, and quality management systems, using conservative assumptions such as high San Francisco salaries and the need for additional staff and secondary evaluations. The findings reveal that compliance costs for GPAI models are minimal, ranging between 0.07% and 1.34% of the total capital expenditure required to build such models. This result is based on models ranging from 10^24 to 10^26 FLOPs of training computation. The analysis suggests that these costs are good value for ensuring the safety, security, and reliability of these technologies, and are seen as beneficial for EU citizens and the digital economy.
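The arithmetic behind the headline figure is straightforward: express compliance spend as a share of total capital expenditure. The sketch below illustrates the calculation with hypothetical dollar figures of our own choosing, not The Future Society's actual inputs:

```python
def compliance_cost_share(capex_usd: float, compliance_usd: float) -> float:
    """Return compliance cost as a percentage of total capital expenditure."""
    return 100 * compliance_usd / capex_usd

# Illustrative figures only: a model build costing $100M total,
# with $1M of estimated compliance spend on top.
share = compliance_cost_share(capex_usd=100_000_000, compliance_usd=1_000_000)
print(f"{share:.2f}%")  # prints "1.00%", inside the reported 0.07%-1.34% band
```

Because compliance costs are dominated by staffing and documentation, they grow far more slowly than training capex, which is why the share shrinks toward the bottom of the band for the largest (10^26 FLOP) models.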