The EU AI Act Newsletter #39: Spain Urges Calm Over the Act
20/10/23-07/11/23
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to Euractiv's Luca Bertuzzi, the EU is in the final stages of negotiating the AI Act. The recent trilogue on 24 October among the Council, Parliament, and Commission reached consensus on classifying high-risk AI applications and overseeing powerful foundation models, albeit with details still pending on prohibitions and law enforcement. Despite a negative review from the Parliament's legal office, the proposal on classifying high-risk AI remained largely the same. The tiered approach to foundation models appears to have broad support, but exactly how to define the top tier of 'very capable' foundation models remains a challenge. A political agreement is expected, though not guaranteed, at the next trilogue on 6 December. Disagreements persist, notably over which AI applications should be prohibited and which exceptions should be granted to law enforcement agencies.
Under the Hiroshima AI process, the G7 leaders have reached an agreement on International Guiding Principles and a voluntary Code of Conduct for AI developers. The EU supports these principles alongside the ongoing creation of legally binding rules within the AI Act. These international standards aim to complement the EU regulations, uphold similar values, and ensure trustworthy AI development. The eleven principles aim to provide direction for the responsible development, deployment, and use of advanced AI systems such as foundation models and generative AI. They include commitments to risk and misuse mitigation, responsible information sharing, incident reporting, cybersecurity investment, and a labelling system for AI-generated content. The principles were developed jointly by the EU and other G7 members and have subsequently formed the basis of detailed practical guidance for AI developers.
Analyses
Cristina Gallardo, a Senior Reporter at Sifted, wrote that Spain's AI and Digitalisation Minister, Carme Artigas, has urged calm among AI startup founders in response to the AI Act. Founders worry that the legislation might hinder Europe's competitiveness against global rivals such as the US and China. Artigas emphasises that the Act's primary goal is to foster innovation, not stifle it, offering a two-year adaptation period and national sandbox initiatives to support companies. She highlights the three years of deliberation behind the rules, assuring that no dimension has been overlooked. Negotiations on the final text continue, focusing primarily on categorising systems, law enforcement use of AI applications, regulating foundation models, and keeping the Act relevant amid AI's rapid advancement. Artigas underscores the Act's adaptability, intending to introduce mechanisms for timely updates so that it does not become obsolete. Despite the challenges, she remains confident that agreement on the legislation will be reached by the end of the year.
Riesgos Catastróficos Globales (RCG) published a position paper on the AI Act trilogue presenting six recommendations for policymakers. The paper focuses on regulating frontier models, with recommendations covering how to define frontier models, their evaluation, risk management systems, deployment safeguards, the establishment of an AI Office, and compliance of open-source models. More concretely, the paper suggests 1) third-party model evaluations and testing, 2) risk management throughout the lifecycle of the frontier model, 3) safeguards such as monitoring for serious malfunctions, incidents or misuse, together with prevention and contingency plans, 4) an independent AI Office to oversee evaluations and assess large-scale risks, and 5) that providers of frontier models comply with the regulation irrespective of whether the models are provided under free and open-source licences.
The European Consumer Organisation (BEUC) has expressed concern about the potential adoption of an ambiguous and inadequate approach to regulating generative AI systems like ChatGPT or Bard within the EU. BEUC emphasises the necessity of a robust legal framework to protect consumers from the risks posed by generative AI, such as manipulation, dissemination of false information, privacy violations, increased fraud and disinformation, and the reinforcement of biases. The proposed approach for determining which generative AI systems fall under specific obligations is criticised as unclear and complex. This ambiguity creates uncertainty for regulators, consumers, and the companies falling within the law's scope. BEUC highlights the risk that only AI systems developed by large companies may end up adequately regulated, leaving a substantial number of systems subject only to weak transparency requirements and consumers inadequately protected in numerous scenarios.
Creative Commons, Communia Association and Wikimedia Europe published a statement advocating a balanced and tailored approach to regulating foundation models, and for transparency within the AI Act more generally. They commend the Spanish presidency's consideration of a more tailored approach to foundation models. The statement stresses the importance of preserving flexibilities for using copyrighted materials as AI training data, striking a delicate balance between users' rights and the needs of scientific research and innovation. Moreover, the statement calls for a proportionate approach to transparency obligations, recommending lighter burdens on smaller players such as non-commercial actors and SMEs. Finally, it expresses concern about the lack of clarity around the copyright transparency obligation, including the scope and content of the obligation to provide training data summaries, and urges clearer guidelines for effective implementation through an accountable entity such as the proposed AI Office.
Kris Shrishak, a Senior Fellow at the Irish Council for Civil Liberties, published an op-ed in Euractiv emphasising the need for greater regulatory empowerment within the AI Act. The current proposal relies primarily on self-assessment by companies, lacking third-party assessments for most high-risk AI systems, notably those used in education, employment, and law enforcement. Shrishak argues that the current draft could leave regulators with inadequate powers and tools to enforce the regulation effectively. His primary concerns are the absence of 'remote investigation' powers, limitations on access to AI system source code, insufficient computational resources, and a shortage of skilled personnel within regulatory bodies. Shrishak advocates empowering regulators with remote investigation capabilities, simplified access to AI source code during investigations, broader access to AI models beyond a mere API, and a larger skilled workforce to enforce the legislation.