Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
Member states approve the act: The EU's 27 member states have unanimously endorsed the AI Act, affirming the political agreement reached in December. Owing to its complexity, the text underwent more than a month of technical refinement. Reservations initially lingered among member states, which had limited time to analyse the text; these concerns were resolved when the Committee of Permanent Representatives adopted the AI Act on 2 February. Notably, France, alongside Germany and Italy, had advocated for a lighter regulatory approach to powerful AI models like OpenAI's GPT-4, in order to support emerging European startups that might challenge American companies. The European Parliament, by contrast, insisted on stringent rules for such models so that the regulatory burden would not fall on smaller actors. The compromise entails a tiered approach, with general transparency requirements for all models and additional obligations for models posing systemic risk. Member states can still shape the implementation of the law through approximately 20 secondary legislative acts. Formal adoption of the AI Act is expected following approval by the parliamentary committees on 13 February and a vote by the full Parliament on 10-11 April.
AI Act Explorer: We at the Future of Life Institute have uploaded the latest version of the AI Act to our website. Our AI Act Explorer enables you to browse the contents of the proposed Act in an intuitive way, or to search for the parts most relevant to you. It contains the full text of the Act as of 21 January 2024 and will continue to be updated as newer versions become available.
Analyses
European AI Office financing: Cynthia Kroet, Senior EU Policy Reporter at Euronews, reported that the European Commission's plan to establish a new AI Office – set to enter into force on 21 February and to oversee rules on general-purpose AI under the upcoming AI Act – has raised concerns among EU member states. During a recent meeting, countries including Denmark, Finland and Sweden sought clarification on budget redistribution and staffing adequacy, given that the office is to be financed through a reshuffling of the Digital Europe Programme's budget. The Commission confirmed that no additional financing was foreseen in the EU's multi-annual budget plan. The AI Office, which falls under the Commission's digital unit, requires around 100 staff members, with 80 to be recruited externally and 20 transferred internally. Alongside the AI Office, the Commission is recruiting for platform regulation tasks within DG Connect, aiming for over 100 full-time staff. Three further supervision and enforcement bodies will also be established: the European Artificial Intelligence Board, an Advisory Forum, and a Scientific Panel.
EU has other laws than the AI Act: Max von Thun, Director of Europe & Transatlantic Partnerships at the Open Markets Institute, wrote an op-ed in Euractiv arguing that, while awaiting the implementation of the AI Act, the EU can take immediate action against Big Tech's AI dominance using its existing competition powers and the Digital Markets Act (DMA). The AI Act's impact will not be felt for years, delaying crucial regulation amid the rapid growth of AI. Big Tech's control over AI, meanwhile, poses immediate threats, including disinformation, surveillance advertising, anti-competitive practices, and copyright and privacy violations. Von Thun argues that Brussels can use its authority to scrutinise partnerships and investments, such as Microsoft's deal with OpenAI, under competition regulations and the DMA. For example, Article 102 of the Treaty on the Functioning of the European Union allows for the investigation of abusive behaviour by dominant firms. The DMA, by targeting dominant digital platforms, can proactively regulate evolving technologies like AI, addressing the shortcomings of traditional antitrust enforcement. According to von Thun, the Commission should swiftly designate dominant cloud services as gatekeepers under the DMA and bring foundation models within the scope of the legislation.
Facial recognition loopholes: Aida Sanchez Alonso at Euronews reported that the AI Act faces criticism over potential loopholes in its approach to facial recognition. Despite efforts to regulate such technologies, civil society groups fear an increase in mass surveillance, owing to the broad conditions under which police may use facial recognition and the legitimising effect the law may have. The regulation differentiates between live and remote use, both of which would be permitted only in specific, judicially authorised contexts. Live facial recognition would be restricted to preventing specific terrorist threats and identifying suspects of serious crimes, while remote use would be limited to locating individuals convicted of or suspected of serious crimes. While some MEPs see the Act as balancing security and civil rights, digital rights organisations criticise it for failing to end mass surveillance, warning that live facial recognition in public spaces could increase, leading to heightened tracking of individuals. Proponents counter that banning such techniques would undermine security efforts and push development to other countries, such as China.
Summary of the final AI Act draft: Barry Scannell, consultant, and Leo Moore, partner, at law firm William Fry have published a brief summary of the AI Act. The final text outlines prohibited AI practices, including: manipulation or deception aimed at distorting behaviour; AI systems that exploit vulnerabilities; biometric categorisation infringing on personal rights; classification of individuals or groups based on social behaviour or characteristics; real-time remote biometric identification in public spaces for law enforcement; profiling individuals to predict criminal behaviour; unauthorised facial recognition databases; and inferring emotions in institutions without medical or safety justification. Notably, AI systems that do not pose significant risks to the health, safety or fundamental rights of natural persons will not be considered high-risk, provided they fulfil specific criteria. Providers of general-purpose AI models must meet obligations such as maintaining documentation and cooperating with authorities. Deep fakes, defined as AI-generated deceptive content, require disclosure unless legally authorised or part of artistic works. In addition, watermarking of AI-generated content is mandated for transparency: providers must ensure that the marking is effective and compatible with technical standards.