Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The upcoming Czech Presidency of the Council of the EU shared a discussion paper with the other EU governments. The paper provides an overview of the main outstanding issues in the AI Act on which the Czechs will focus: namely, the definition of AI, which use cases are considered high-risk, the governance and enforcement framework, and whether national security should be exempted from the regulation. On the AI definition, questions are raised about its scope and the extent to which the Commission can change it in the future. On high-risk AI systems, one option is to have high-level criteria for evaluating what is deemed a significant risk. On governance and enforcement, under consideration is the degree of flexibility national laws should have to take precedence in certain circumstances. Finally, with regard to the national security exemption, additional clarity is sought on when exactly military AI systems are in and out of scope.
EURACTIV published an overview of the French Presidency's latest compromise text on the section of the AI Act related to AI regulatory sandboxes. According to the overview, this version is shortened and simplified to give member states more flexibility. The aim is to avoid overly constraining national authorities by either removing or weakening their obligations, as indicated by the change in wording from "rules" to "principles" regarding the Commission's role in legislating for sandboxes. This contrasts with the European Parliament industry committee's (ITRE) proposal to empower the Commission by strengthening its role as coordinator. The French Presidency further suggests that the relevant authorities take into account testing conducted in the sandboxes when carrying out compliance assessments. Testing of high-risk systems in real-world conditions would also fall outside the scope of sandboxes under the French proposal.
The Spanish government and the European Commission held a launch event for Spain's pilot Regulatory Sandbox on AI. It is the first AI sandbox set up in the EU since sandboxes were proposed in the AI Act, with the aim of facilitating the testing of technical solutions and compliance procedures. At the session, the Secretary of State for Digitalization and Artificial Intelligence, Carme Artigas, confirmed that an open call inviting companies to start testing high-risk AI systems in the sandbox will be launched in October or November this year. Testing will take place based on draft implementation guidelines, which will be updated every three months. The first deliverables foreseen for this sandbox are an implementation guide and an auditing tool to support competent authorities in conducting conformity checks and post-market monitoring. The first results from the sandbox will be published in Autumn 2023.
The European Parliament's culture and education committee (CULT) recently adopted its recommendations on the AI Act. These are, in a nutshell, increased transparency, the inclusion of more systems under the high-risk category, and requirements for national AI literacy programs. Higher transparency obligations are sought for systems that "create or disseminate machine-generated news" and "recommend cultural and creative content". The committee also recommends that systems used to monitor students during tests and detect cheating be classified as high-risk. The rationale is that the risk of bias is high: for instance, skin colour can lead to false detection of objects in students' hands. Last but not least, member states should be obliged by the AI Act to "promote a sufficient level of AI literacy" to foster knowledge about how AI functions and what its possible benefits and risks are.
The European Parliament's industry committee (ITRE) adopted MEP Eva Maydell's draft report on the AI Act with a large majority. MEP Maydell highlights that the AI definition they put forward is largely in line with the OECD's definition, which she sees as a means of setting a global standard. A unified European approach to sandboxes is included in a new annex. According to Maydell, the proposal to include SMEs in the standardisation process is the cornerstone of the new draft, alongside the aim of lowering compliance fees. A call for high but feasible standards is also made in relation to accuracy, robustness, cybersecurity and data requirements. Finally, a research exemption is put forward under which the regulation does not apply to AI systems specifically developed and put into service for the sole purpose of scientific research.
Analyses
The New Statesman published an op-ed focusing on the debate around general-purpose AI systems in the AI Act. It notes that while the European Commission's initial draft of the act last April did not mention general-purpose AI systems, the French EU Presidency made a significant turn in May this year. Under its proposal, original providers of these systems would need to document their systems and comply with the standards that apply to other, more specific AI. MEP Benifei agrees that the French proposal is a welcome improvement, but thinks it is not sufficient, as it would subject these systems only to a subset of requirements, with a self-assessment procedure and an exemption for SMEs. According to the article, some tech companies are concerned that the new proposal could dramatically increase the scope of the legislation.
The European Parliamentary Research Service published a briefing on the AI Act and AI regulatory sandboxes. The rationale of sandboxes is to let businesses experiment with new products and services under a regulator's supervision. However, regulatory sandboxes can also be misused or abused, and hence need a legal framework. One risk arising from the AI Act proposal is that participants in sandboxes would not be exempted from liability for any harm caused. Another risk highlighted in the briefing is that the voluntary adoption of the EU regulatory sandboxes regime by Member States creates a risk of fragmentation. Finally, concerns are raised over conflicts between the rules on regulatory sandboxes and GDPR principles.