Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Florin Zubașcu from Science Business reported that the EU is calling for a self-regulation initiative for generative AI products, such as ChatGPT, due to the absence of proper regulation. The EU hopes to lead a joint initiative with the US to establish a voluntary code of conduct to which companies can sign up. The proposal was presented by the European Commission's Executive Vice-President Margrethe Vestager at the EU-US Trade and Technology Council meeting. The EU has proposed AI legislation with its AI Act, but its advancement through the European Parliament and the Council has been slow. Vestager hopes that talks between the three EU institutions will start in the coming weeks and reach a deal by the end of the year, with the legislation taking effect in two to three years. To fill the legislative void in the meantime, the EU hopes to seal an international agreement between the G7 countries and invited partners. The EU aims to advance a draft of the AI code of conduct with industry input in the coming weeks and hopes that other countries, such as Canada, the UK, Japan, and India, will back the effort.
Analyses
DIGITALEUROPE wrote a report based on in-depth interviews with nine European start-ups and SMEs, conducted between March and April 2023. The interviews covered topics such as familiarity with the AI Act proposal, its impact on compliance and business, and familiarity with AI standards. The report presents six recommendations to ensure that the upcoming AI Act and its regulatory sandboxes strike the right balance between compliance and competitiveness for European companies. These recommendations include an investment plan, more clarity on the scope of the proposal, harmonised standards, international alignment, gradual implementation and practical support for SMEs, and room for modifications down the line based on practical experience.
Muhammed Demircan, PhD researcher at Vrije Universiteit Brussel, published a summary on the Kluwer Competition Law blog of the obligations the AI Act will place on deployers of high-risk AI systems. The Act places primary responsibility on providers, who develop AI systems with the intention of placing them on the market or putting them into service under their own name or trademark. Users or deployers of AI, such as work organisations, banks, and supermarkets, are crucial to ensuring the safe and fair use of AI systems, and will outnumber providers. High-risk AI systems, a category covering critical infrastructure, biometric identification, employment, the administration of justice, and other areas, are subject to strict obligations, including a certification regime. The European Parliament's version of the AI Act has significantly changed the obligations of deployers of high-risk AI systems, requiring them, among other changes, to conduct a fundamental rights impact assessment.
Lucia Yar of EURACTIV summarised an interview with Juraj Čorba, representative of the Slovak Ministry for Investments, Regional Development and Informatisation. Yar reports that member states without a flourishing AI ecosystem are using the AI Act negotiations to lessen the potential burden of implementing the EU's AI rules. Slovakia, which is still developing its AI ecosystem, sees an opportunity for comparative advantage if it succeeds in implementing the rules. Beyond the AI Act itself, the final regulatory landscape will be defined by a complex set of laws and EU initiatives, including the Digital Services Act, the Digital Markets Act, and numerous other regulations and directives. Slovakia's key focus is on enforcement and on ensuring high-quality public supervision structures that will be in charge of applying the legislation as a whole. Like several other countries in Central and Eastern Europe, Slovakia is struggling with a substantial shortage of qualified workers. It is trying to create a regulatory and institutional framework that will allow it to excel in the European or global market by attracting investment.
Shiona McCallum and Chris Vallance of the BBC reported that OpenAI CEO Sam Altman has retracted his earlier threat to leave the EU bloc, a threat made in response to the upcoming law on AI, which he deemed "over-regulating". The proposed AI Act would require generative AI companies to disclose which copyrighted material has been used to train their systems to create text and images. Altman was concerned that OpenAI would find it technically impossible to comply with some of the safety and transparency requirements of the AI Act. Yet after widespread coverage of his comments, he backtracked and tweeted that OpenAI is excited to continue operating in the EU and has no plans to leave.
Kris Shrishak, Senior Fellow at the Irish Council for Civil Liberties, wrote an op-ed arguing that the EU should follow China's draft law on generative AI, which prohibits the unconsented use of copyright-protected content and personal data for training AI models. Shrishak states that while the EU has taken a tough stance on regulating generative AI, it has failed to address the use of copyrighted material in AI development. He also notes that data protection regulators are using existing tools to address AI risks: the European Data Protection Board launched a task force to exchange information related to data protection enforcement; the UK competition authority initiated an investigation into generative AI; and the Italian Data Protection Authority forced OpenAI to make limited data protection improvements. However, Shrishak emphasises that the EU continues to rely on self-assessments by AI developers rather than third-party assessments, which could lead to enforcement issues, as seen with the GDPR.
The handling of general-purpose AI in the EU AI Act looks like a mess to me!
In particular, Version 1.1 of the Draft Compromise Amendments from May 16, 2023 introduces the notions of "foundation models" and "general purpose AI". Article 3 (Definitions) states:
(1c) ‘foundation model’ means an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks;

(1d) ‘general purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed;
And the difference between these two would be... what, exactly?
Section (60e) clarifies slightly by asserting "each foundation model can be reused in countless downstream AI or general purpose AI systems". From this, I gather that the authors would see GPT-4 as a foundation model and ChatGPT (adapted from GPT-4) as a general-purpose AI. But, at a technical level, this feels like a superficial and transitory distinction, analogous to two different user interfaces atop the same database.
Beyond these two definitional mentions of "general purpose AI", I can find only one other passing reference to "general purpose AI" in Article 28(1)(ba), which is a clause noting that a "general purpose AI system" is considered an "AI system". This seems like a redundancy analogous to "This applies to all people, including George."
Also, is a "general purpose AI" derived from a foundation model still itself considered a foundation model? This seems like a critical question, because the act says a lot about foundation models, but (as noted above) almost nothing about general purpose AI. And, if not, rules on foundation models could be evaded by making the superficial transition to a "general purpose AI".
Totally confusing. At least to me.