Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The Council of the EU adopted its common position on the AI Act. The definition of AI systems is narrowed to machine learning and knowledge-based approaches. The prohibition of certain practices, such as social scoring, is extended to private actors. General-purpose AI systems, which can be used for many different purposes, are addressed through implementing acts. The position explicitly excludes AI systems for national security, defence, and military purposes, and simplifies the compliance framework with clarifications to the conformity assessment procedures. The AI Board's role is strengthened, and stakeholder involvement is secured through a permanent subgroup. Penalties for infringements are made more proportionate for SMEs and startups. Transparency is increased for high-risk AI systems, including through a registry and an obligation to inform people when they are exposed to an emotion recognition system. Support for innovation includes AI regulatory sandboxes to test innovative AI systems in real-world conditions, unsupervised real-world testing of AI systems under specific conditions, and alleviated administrative burdens for smaller companies. The next step is for the Council to enter negotiations with the European Parliament once the latter adopts its own position.
The European Commission published a draft standardisation request to the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) in support of safe and trustworthy AI. The request states that standards are important for implementing EU AI policies that ensure the safety and protection of fundamental rights of EU citizens. Standards can also help establish a level playing field for the design and development of AI systems, especially for SMEs. To advance technical harmonisation in trustworthy AI and prepare for the implementation of the proposed AI Act, these standards will specify requirements for risk management, data governance, transparency, human oversight, accuracy, robustness, resilience, and quality management, and will provide procedures for conformity assessment.
Analyses
Access Now circulated a joint statement arguing that the proposed AI Act does not adequately address harms stemming from the use of AI in migration contexts. The signatories state that for marginalised communities, including migrants, refugees and asylum seekers, AI technologies facilitate surveillance, criminalisation, discrimination and violence. For these reasons, they recommend the following changes to the AI Act. 1) Prohibit unacceptable uses of AI systems in migration contexts, such as predictive analytics systems used to interdict migration. 2) Expand the list of high-risk AI systems used in migration to include systems like biometric identification. 3) Ensure the AI Act applies to all high-risk AI systems in migration, including those in EU IT systems. 4) Ensure transparency and oversight measures apply, including requirements to conduct and publish fundamental rights impact assessments and to register the use of high-risk AI systems in a public database.
The European Digital SME Alliance issued an opinion welcoming the replacement of the AI Board with an EU AI Office, an independent body with its own funding and staff, to respond to the technical and governance challenges of AI. The Alliance argues that an AI Office would be well-equipped to issue opinions and recommendations on technical standards and regulatory sandboxes, and to conduct capacity building. It would also monitor innovation, guide the AI regulatory landscape, and benefit from including stakeholders such as academics, civil society, businesses, and SMEs in its advisory board. The opinion notes that new regulatory bodies have been established in the past to respond to new technologies, such as the European Atomic Energy Community and the European Union Agency for Cybersecurity, and that an EU AI Office would be a similarly critical investment.
The US-EU Trade and Technology Council published a roadmap on evaluation and measurement tools for trustworthy AI and risk management, relating the NIST risk management framework on the US side to the AI Act on the EU side. The roadmap emphasises that approaches should be supported by science, international standards, shared terminology, and validated metrics and methodologies. It suggests the following activities to align EU and US approaches: developing shared terminologies and taxonomies; leading and cooperating on international technical standards and risk management tools; and monitoring and measuring existing and emerging AI risks.
Mozilla summarised an event it organised about general-purpose AI systems. One takeaway from the event is that the AI Act is built on a product safety model, which does not account for the different ways AI systems can be used or misused, as general-purpose AI (GPAI) does not have a single purpose. GPAI models are often trained on biased datasets and increasingly used in consumer products. Participants argued that regulation should address these models and place obligations on the actors best able to comply across the complex AI value chain. They also discussed how open-source AI enables innovation but poses risks, and stated that regulation should be technically informed, aware of its effects, and should avoid concentrating power and enabling big companies to leverage GPAI systems. They recommended that the EU invest in computing infrastructure to enable scrutiny of large AI models, and promote decentralisation rather than big companies controlling AI systems behind APIs.
The Center for Data Innovation published an analysis showing that the draft AI Act text would unintentionally over-classify many common products, like smartphones and IoT devices, as high-risk AI systems. The authors argue that this is a problem because the legal obligations associated with a high-risk classification would drive up the cost of purchasing and using these products for both EU consumers and EU businesses. The analysis shows that the high-risk classification would unintentionally apply to many products where the built-in AI does not serve any safety-critical function; the products affected range from smartphones and IoT devices to watercraft. To solve this over-classification problem, the authors propose specific edits to Article 6 of the Act.
Holistic AI published an overview of the possible penalties for non-compliance with the AI Act. The post highlights that the AI Act sets out a three-level structure of fines: penalties for non-compliance with the prohibitions on certain practices, for non-compliance with the obligations for high-risk systems, and for failure to cooperate with the competent national authorities. The proposed fines descend in severity across these levels. The heftiest fines are reserved for violating the prohibition of specific AI systems: up to €30 million or 6% of annual worldwide turnover. SMEs and start-ups face lower fines, capped at 3% of annual worldwide turnover.
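To make the arithmetic of the top tier concrete, here is a minimal sketch in Python. It assumes the "whichever is higher" rule between the fixed amount and the turnover share from the Commission's original proposal, and it assumes that for SMEs only the lower 3% turnover share applies; the function name and the SME treatment are illustrative assumptions, not quoted from the Act.

def max_fine_top_tier(annual_worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative ceiling for the most severe tier of AI Act fines.

    Assumes the 'whichever is higher' rule between a fixed amount
    (EUR 30 million) and a share of annual worldwide turnover (6%),
    with the share reduced to 3% for SMEs and start-ups. How the
    fixed amount interacts with the SME cap is an assumption here.
    """
    fixed_amount_eur = 30_000_000
    turnover_share = 0.03 if is_sme else 0.06
    if is_sme:
        # Assumption: SMEs face only the (lower) turnover-based ceiling,
        # reflecting the aim of proportionate penalties for smaller firms.
        return turnover_share * annual_worldwide_turnover_eur
    return max(fixed_amount_eur, turnover_share * annual_worldwide_turnover_eur)

# A provider with EUR 2 billion turnover: 6% (EUR 120 million) exceeds EUR 30 million.
print(max_fine_top_tier(2_000_000_000))          # 120000000.0
# A start-up with EUR 5 million turnover: capped at 3% (EUR 150,000).
print(max_fine_top_tier(5_000_000, is_sme=True)) # 150000.0

The actual amount in any given case would depend on the final legislative text and national enforcement; the sketch only illustrates the relative scale of the ceilings.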