Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
EURACTIV reported that the European Parliament’s co-rapporteurs on the AI Act have proposed that the European Commission be able to extend the list of high-risk systems and prohibited practices after the legislation is passed. In the original text, the Commission could not change the list of high-risk areas apart from modifying or deleting use cases. Under the new proposal, the Commission could even extend the list of prohibited AI applications. According to the article, these changes would be implemented through a yearly review involving the AI Board and national authorities. The question of whether an AI board is sufficient or whether a more powerful AI office needs to be created was left unaddressed.
The Council of the EU published a new compromise text, which may be its last. The text specifies that the AI Act does not apply to the obligations of users who are natural persons using AI systems in the course of a purely personal, non-professional activity (aside from transparency requirements). Regarding the definition of AI, ‘a certain level’ of autonomy has been replaced by ‘elements’ of autonomy. The AI value chain was added as another aspect the Commission will have to take into account when preparing implementing acts in relation to general purpose AI systems. The word ‘remote’ has been reinstated as part of biometric identification. Finally, a new provision exempts high-risk AI systems in the areas of law enforcement, migration, asylum and border control management, and critical infrastructure from the obligation to register in the EU database.
Analyses
The Future of Life Institute published an open letter on general purpose AI systems in the AI Act. Ten civil society organisations state that the trend towards more general and capable systems is unmistakable, and that these systems carry great potential for harm if left unchecked. Some harms have already occurred, with systems propagating extremist content, encouraging self-harm, exhibiting anti-Muslim bias, and inadvertently revealing personal data. The letter advocates that responsibility for complying with the AI Act’s obligations be shared between providers (developers) and users (deployers). The NGOs argue that shifting the Act’s obligations onto downstream users would make these systems less safe, because those users lack the same capacity to change or influence the behaviour of the model. They add, however, that users are best placed to comply with requirements relating to the specific high-risk use case, especially when use cases are novel and cannot reasonably be foreseen by providers.
EURACTIV summarised a document on the AI Act from the United States administration that was sent to some EU capitals and the European Commission. The document advocates for a narrower AI definition, an exemption for general purpose AI systems (GPAIS), and a more specific risk assessment process in the AI Act. On the definition of AI, the document argues that the Council’s definition still includes systems that are not sophisticated enough to warrant coverage by the AI Act, and recommends narrowing it down based on the OECD definition. On GPAIS, the US document claims that risk-management obligations on providers could be very burdensome, technically difficult or even impossible. It further states that GPAIS providers should not have to cooperate with users to help them comply with the AI Act, because doing so would require disclosing confidential business information.
Access Now and nine other NGOs sent a letter to the Czech Deputy Prime Minister for Digitisation asking for stronger fundamental rights protections in the Council’s AI Act position. The letter points to the following areas for improvement: 1) include rights and redress mechanisms to empower people affected by AI systems; 2) ensure meaningful accountability and public transparency obligations for public uses of AI systems and all ‘users’ of high-risk AI; 3) ensure meaningful and balanced civil society participation in all aspects of the AI Act; 4) mandate accessibility requirements for providers and users of all AI systems throughout the life cycle; and 5) implement comprehensive prohibitions on all AI systems posing an ‘unacceptable risk’ to fundamental rights.
The Center for Data Innovation published an article arguing that a ban on AI systems using subliminal techniques is unnecessary given other legal safeguards, and would hamper the development and adoption of legitimate AI applications. The author claims that there is no consensus on what ‘subliminal’ means, nor on whether subliminal techniques even work, and that, at best, subliminal stimuli can bring forth already-intended actions. The article adds that other laws already address subliminal manipulation, including the updated Audiovisual Media Services Directive and the Digital Services Act. Finally, the author recommends that, instead of a prohibition, companies be required to disclose their use of subliminal techniques to customers, or that such systems be added to the high-risk list where disclosure is inappropriate.
MedTech Europe and 11 other industry stakeholders published a joint statement calling for the alignment of the proposed AI Act with sector-specific product safety legislation. The industry actors state that if issues of overregulation and misalignment are not addressed, the resulting regulatory uncertainty could adversely affect European citizens’ access to safe and high-quality goods, ranging from machinery and tools used in the automotive and furniture sectors to medical technologies and many others. The statement’s main recommendation is to maintain consistency with the New Legislative Framework, including a clearer definition of risk. In addition, it recommends limiting the scope of AI applications to products with evolving behaviour, allowing the freedom to allocate responsibilities through contractual agreements, and prioritising harmonised standards over common specifications.