Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The Czech Presidency of the Council of the EU published its second compromise text on 15 July. Key proposals include narrowing the definition of AI to systems developed through machine learning techniques and knowledge-based approaches, adding an article empowering the Commission to adopt implementing acts to further specify and update AI techniques, and extending the elements to be taken into account when classifying AI systems as high risk, namely the significance of the AI system's output and the immediacy of its effect. Delegations have been asked to provide suggestions on the entire second compromise proposal by 2 September 2022.
According to EURACTIV, the European Telecommunications Standards Institute (ETSI) has suggested to the European Commission that the definition of AI and the categorisation of high-risk applications in the AI Act be left to technical standards. ETSI argues that standardisation bodies will need to ensure the definitions are consistent across technical standards, but some members of civil society consider delegating a political decision to technical bodies undemocratic. Among the challenges standardisation bodies face are that categories of risk depend on concrete use cases, that categorising high-risk systems implies value-based assessments, and that an AI system might pose low risk individually but become high risk when interacting with other AI systems.
Analyses
The Center for Democracy and Technology wrote a blog post on whether the AI Act will adequately address access to remedy for victims of discrimination by AI. The post argues that the initial AI Act draft failed to include opportunities for redress for the individuals most likely to be adversely affected by AI, as well as for the public interest organisations that play an important role in representing those individuals. The author adds that a joint draft report from the two co-rapporteurs in the LIBE and IMCO Committees improves access to redress by introducing a right for individuals and groups to lodge a complaint with the national supervisory authority in the case of a breach of their health, safety or fundamental rights. The author generally welcomes this improvement but wants the redress mechanisms to be more concrete, actionable and effective.
The Future of Privacy Forum gave an overview on its blog of what the conformity assessment in the AI Act is. Generally speaking, the conformity assessment is the “process of verifying whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled”. The post also highlights that the AI Act establishes a presumption of compliance with the requirements for high-risk AI systems if such a system conforms to the relevant harmonised standards. Furthermore, a conformity assessment must be performed before a high-risk AI system is placed on the EU market or put into service, as well as when a high-risk system already on the market is substantially modified. The post provides further detail on who must perform this assessment and how exactly it should be done.
Science Business reported in an article that the UK government has decided to take a different approach to regulating AI from the EU's AI Act. According to the article, the UK will allow individual regulators to take their own approach to the use of AI in a range of settings, rather than giving responsibility for AI governance to a central regulatory body as the AI Act does. The UK does not appear to designate any uses of AI as prohibited and does not intend to create new laws to regulate AI for now. The UK is also critical of creating a list of risky AI uses and instead puts forward six principles for regulators to follow, including transparency and explainability.
Deloitte published a blog post on the risk management requirements of the AI Act proposal. Article 9 outlines the need to establish, implement, document and maintain effective risk management systems for high-risk AI systems. The post highlights that providers of such systems will be required to conduct periodic risk assessments to identify all known and foreseeable risks associated with their AI systems. It emphasises that foreseeable risks include “the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems”. The post also argues that risk management systems will have to include tracking of AI systems after deployment, as well as plans to mitigate any risks or failures once systems are distributed to users.
The Irish Council for Civil Liberties (ICCL) published an op-ed on EURACTIV arguing that the AI Act should prevent serious incidents, not just report them, by requiring that “near-misses” be reported and addressed. The op-ed criticises the fact that the AI Act requires only serious incidents to be reported, with no reporting obligations for anything less severe, for example when a self-driving car runs a red light but does not hit anyone. The author recommends that the EU AI database log and publish near-miss reports because this would 1) inform developers about near-misses others have encountered, 2) help developers widen the range of tests for AI systems, 3) give monitoring organisations the information needed to update the regulation's annexes, and 4) give researchers new ideas for making AI systems safer.