Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The Internal Market and Civil Liberties Committees of the European Parliament have adopted a draft negotiating mandate for the AI Act, with 84 votes in favour, 7 against, and 12 abstentions. The draft seeks to establish a uniform definition of AI that can apply to current and future systems. It prohibits AI systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring. It also bans intrusive and discriminatory uses of AI, including, among others, real-time remote biometric identification in publicly accessible spaces. The classification of high-risk areas has been expanded to include harm to people's health, safety, fundamental rights, or the environment. Under the draft, providers of foundation models would have to guarantee the protection of fundamental rights, health and safety, the environment, democracy, and the rule of law. The mandate seeks to boost innovation by exempting research activities and AI components provided under open-source licences from these rules, and it promotes regulatory sandboxes for testing AI before deployment. It also strengthens citizens' right to file complaints and to receive explanations of decisions based on high-risk AI systems. The draft mandate must be endorsed by the whole Parliament before negotiations with the Council on the final form of the law can begin; the vote is expected during the 12-15 June sessions.
Analyses
Natasha Lomas from TechCrunch summarised some expert views on the European Parliament's draft. Digital rights group EDRi highlighted that it has been advocating for some of the revisions made to the Commission draft, including the full ban on facial recognition in public alongside bans on predictive policing and emotion recognition. Sarah Chander, EDRi Senior Policy Advisor, stated that the Parliament is sending a clear message that some uses of AI are too harmful to be allowed. However, EDRi also noted remaining areas of concern, including the use of AI for migration control and the fact that developers can decide whether their own system counts as high-risk. Kris Shrishak, a Senior Fellow at the Irish Council for Civil Liberties (ICCL), said that while the Parliament has strengthened enforceability by explicitly allowing regulators to perform remote inspections, regulators should also have access to AI systems' source code for investigations. He added that the exemptions for research activities and AI components provided under open-source licences might create loopholes. Alexander Sander, Senior Policy Consultant at the Free Software Foundation Europe, by contrast, considers it unlikely that big tech will be able to outsource everything to micro enterprises and exploit any such loopholes.
Deloitte consultants wrote a summary of the Parliament's amendments focusing on the impact on financial institutions. The authors explain that the AI Act gives little attention to AI tools deployed in the financial sector, with the exception of credit scoring and insurance risk assessment: AI systems used to evaluate credit scores or creditworthiness, and for risk assessment in life and health insurance, are likely to be classified as high-risk. The authors suggest that financial institutions should ensure they comply with the AI Act when deploying AI technology in their services, especially when relying on high-risk AI systems and providing services to natural persons or retail clients. Finally, the authors state that the bodies already in charge of financial supervision will integrate market surveillance activities under the AI Act into their existing supervisory practices under the financial services legislation.
Siddhant Chatterjee, Public Policy Associate at Holistic AI, wrote a blog post summarising regulatory approaches to generative AI and foundation models around the world. Generative AI, which includes large language models, transformers, and other neural networks, has the potential to bring benefits across a variety of use cases, from commerce and cancer research to climate change. However, there are concerns that these models might be misused to spread misinformation and disinformation, create inappropriate content, and harvest huge quantities of personal data without informed consent. The European Union is seeking to establish comprehensive regulatory governance through the AI Act, introducing a tier-based approach for foundation models and generative AI, with stricter transparency obligations for the latter. The United States is seeking to understand what kind of data is needed to conduct algorithmic audits; Massachusetts is so far the only state to have introduced a bill aimed at generative AI, mandating privacy and algorithmic transparency standards for companies developing such models. China has issued draft rules to regulate generative AI providers, requiring compliance with measures on data governance, bias mitigation, transparency, and content moderation, as well as mandating security assessments before generative AI services are released to the public. Meanwhile, India and the UK have taken a light-touch approach to regulating generative AI.
Yonah Welker, Board Member at Yonah.ai, wrote an article for the OECD AI website on how the EU could take disabilities into account when constructing AI legislation. Welker explains that AI systems may discriminate against individuals with disabilities, cognitive and sensory impairments, or autism spectrum disorders, leading to inaccurate identification, discrimination, or even life-threatening scenarios. For example, hiring and job search platforms have reportedly discriminated against individuals with disabilities, and social networks have mistakenly identified people with disabilities as non-human. Since the inception of the AI Act, disability organisations have been advocating for more focus on disability-specific cases, for vocabulary and legal frameworks that address negative scenarios and misuse of high-risk systems, and for the prohibition of specific unacceptable-risk systems. However, Welker states that further updates and development are needed to address more substantially the needs of those with disabilities, particularly in high-risk areas such as policing, autonomous weapons, and law enforcement.