Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV's Luca Bertuzzi, the European Parliament is planning to vote on the AI Act on 26 April. While many questions have been settled, a few critical issues remain, including what requirements will apply to general purpose AI, how to deal with biometric recognition systems, and what to do about enforcement and governance. In a concession to the centre-right European People's Party, the definition of AI was aligned with the OECD's definition. The use of real-time facial recognition software is set to be banned, though ex post use would be allowed as a high-risk application. It also remains open whether the AI Office, a new EU body, will have a purely coordinating role or additionally have enforcement powers in cross-border cases; if the latter, it is unclear where the resources would come from.
Analyses
Ryan Browne at CNBC reported that Italy has become the first Western country to ban the use of ChatGPT, citing concerns over a data breach, the lack of a legal basis for ChatGPT's mass collection of data, the absence of age restrictions and the potential to spread misinformation. The move has highlighted the lack of concrete regulations for AI, with the European Union and China among the few jurisdictions developing tailored rules. Britain is not proposing restrictions on ChatGPT or on AI more broadly; instead, it has proposed a set of principles for companies to follow: safety, transparency, fairness, accountability, and contestability. The EU, in its AI Act, is considering treating ChatGPT as a form of general purpose AI and could apply to it some of the measures imposed on high-risk applications. The US has not taken any action to limit ChatGPT, although a complaint has been made to the Federal Trade Commission alleging that OpenAI's latest large language model, GPT-4, violates the agency's AI guidelines. ChatGPT is not available in China, North Korea, Iran or Russia, but China's regulations on deepfakes and recommendation algorithms could in theory apply to ChatGPT-style AI systems.
Tambiama André Madiega, policy analyst at the European Parliamentary Research Service, wrote a summary of general purpose AI, covering what it is, what applications exist, what concerns it raises, and how it may be regulated under the AI Act. A general purpose AI, also known as a foundation model, is trained on broad sets of unlabelled data, can be applied to different tasks with minimal fine-tuning, and is accessible to downstream developers through APIs and open-source access. OpenAI, Microsoft, Google and DeepMind are developing general purpose AI tools with the potential to transform many areas, powering everything from new search engine architectures to personalised therapy bots. Many experts are calling for governance and oversight of these systems, as they have been shown to pose ethical and social risks, including perpetuating stereotypes and social biases and providing false or misleading information. Some experts are calling for a six-month pause in the training of AI systems more powerful than GPT-4. Stakeholders are also calling for general purpose AI to be included in the scope of the upcoming EU AI Act, with proposals ranging from creating a separate risk category for general purpose AI systems to discouraging API access where general purpose AI would be used in high-risk AI systems.
Christine Galvagna, senior researcher at the Ada Lovelace Institute, wrote a discussion paper on civil society participation in standards development in the context of the AI Act. Galvagna emphasises that the EU's reliance on technical standards to provide detailed guidance for compliance with the AI Act's requirements may leave fundamental rights and other public interests unprotected, as standards development bodies lack the expertise and legitimacy to interpret human rights law and other policy goals. One proposed approach is to boost civil society participation in the standardisation process, but this alone is unlikely to provide the political and legal guidance needed to interpret essential requirements. Another proposal focuses on institutional innovations: expanding civil society participation through funding and the creation of a central hub, or creating a benchmarking institute to complement procedural and documentation-oriented standards with more substantive ones. The European Commission could also exercise its right to develop common specifications in order to address safety and fundamental rights concerns not captured by technical standards.
EURACTIV's Luca Bertuzzi also summarised a report from the EU law enforcement agency Europol, which warned that large language models like ChatGPT could be employed for online fraud and other cybercrimes. Europol explains that criminals could exploit these models to carry out a range of crimes, including phishing, online fraud, propaganda, hate speech, and disinformation. ChatGPT's moderation rules can be circumvented through prompt engineering, allowing malicious actors to obtain specific outputs from the model. ChatGPT is considered a general purpose AI: an AI model that can be adapted to carry out various tasks. As the European Parliament finalises its position on the AI Act, MEPs have discussed introducing strict requirements for these models, such as risk management, robustness and quality control.
appliedAI's Managing Director Andreas Liebl and Head of Trustworthy AI Till Klein published a study of 106 AI systems in enterprise functions, identifying uncertainties for AI users arising from the AI Act's risk classification. The study found that 18% of the AI systems were classified as high-risk and 42% as low-risk, while for 40% it was unclear whether they were high-risk or not. The areas of unclear risk classification mainly revolve around critical infrastructure, employment, law enforcement, and product safety. The study's aim was to highlight the potential influence of the classification rules and their interpretation on the adoption of AI in companies. The authors argue that clear classification rules in the AI Act are important to reduce ambiguities and create certainty for investment plans while protecting health, safety, and fundamental rights.