Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The French Presidency of the Council of the European Union shared a new compromise text on general purpose AI systems in the AI Act. Its goal is to ensure a fair distribution of responsibilities along the AI value chain. The text defines these systems by reference to their potential to be used in a plurality of contexts and to be integrated into other AI systems. General purpose AI systems which may be used as high-risk AI systems or as components of such systems must comply with Articles 9, 10, 11, 13(2) and 13(3)(a) to (e) and 15. The requirements do not apply if the provider "has explicitly excluded any high-risk uses in the instructions of use or information accompanying the general purpose AI system". Micro, small and medium-sized enterprises are exempted from these requirements and obligations.
Kai Zenner, Head of Office for MEP Axel Voss, created a collection of all the official AI Act documents in one place, including the latest wordings of the Council and the European Parliament. He plans to regularly update the page. Using this resource, you can read the original proposal of the AI Act by the European Commission, including the annexes. It also includes documents from both the Slovenian Presidency and French Presidency of the Council, as well as the IMCO/LIBE report and opinions of JURI, ITRE, CULT, ENVI and TRAN committees from the Parliament. Finally, it offers the opinions of other EU institutions and advisory bodies.
Analyses
CNBC published an op-ed discussing how China and Europe are leading efforts to regulate AI. The op-ed mentions that China recently rolled out regulations governing online recommendation algorithms, while the EU has just finished negotiations on the Digital Markets Act and the Digital Services Act and is now working on the AI Act. The op-ed argues that China’s efforts focus on the influence of tech companies on public opinion, whereas the EU AI Act seeks to regulate all of AI. Another highlighted difference is that the European approach will require pre-market assessment, whereas the Chinese rules do not kick in until products or services are introduced to consumers. Finally, the claim is made that China and Europe will dominate the way AI is policed, but that these approaches will be quite different, creating a risk of separating researchers into different jurisdictions.
CMSWire published an overview of the AI Act, focusing on its impact for marketers. It lists the following AI use cases as unacceptable risk and therefore set to be banned: manipulation through subliminal techniques; exploitation of specific vulnerable groups, such as children; social scoring done by public authorities (like China’s social credit system); and real-time remote biometric identification in public spaces by law enforcement (with exemptions). CMSWire also highlights that the Act establishes transparency obligations for non-high-risk systems that interact with humans, detect emotions or categorise based on biometric data, and generate or manipulate content. The Act draws both criticism and praise from the perspective of marketers. One claim is that oversight expectations are too broad and do not differentiate between different domains of use such as platforms, infrastructure, and market. The article argues that the lack of a right for individuals to complain and of avenues for collective remedy stands in contrast with the GDPR provisions. It's important to note, however, that the latest Parliament amendments indicate that a complaint procedure for individuals will likely be introduced. On the other hand, the article emphasises the positive that the Act can compel companies to conduct rigorous assessments before releasing AI systems to the market.
Silicon Republic published a discussion of the AI Act's treatment of facial recognition technologies. The piece asserts that one issue with many facial recognition technologies is that they are being developed by large private companies capable of identifying loopholes in regulation. It gives the example of companies claiming that their CCTV cameras do not use facial recognition. This claim hides the full picture, however, since for the most part facial recognition is only performed once the image has been sent to a server. In addition, according to the piece, the AI Act's phrasing of the enforcement framework is concerning because it is neither clear nor strong enough; a case in point is Clearview AI, whose multinational nature makes it difficult for regulators to enforce existing laws against the company.
The Thomson Reuters Foundation produced an opinion article arguing that the AI Act needs to better protect refugees against high-risk border technologies. The point is made that border technologies hurt people, with most of the impact felt by under-resourced and marginalised communities. The AI Act presents an opportunity to meaningfully address the technologies tested and deployed at Europe’s borders. To do so, the authors argue, the Act should ban the use of personal data to profile refugees, AI lie detectors, and remote biometric identification and categorisation in public spaces. Furthermore, it should include a range of surveillance technologies in the high-risk category and set up stronger oversight and accountability measures for situations where the human rights to mobility and asylum are at risk.
The MIT Technology Review published an overview of the current AI Act proposal. The US is interested in the EU AI Act for two reasons. First, US companies made to comply with the regulation through their engagement with the EU market are expected to raise standards for the US market as well. Second, the Biden administration is keen to keep Europe as an ally in the geopolitics of AI, and the AI Act could represent a template for the safeguarding of democratic values. Nonetheless, the article raises some points of concern about the AI Act. In the initial draft, the requirements of error-free datasets and fully understandable AI systems are technically unfeasible. It is important to clarify that there is broad consensus that this needs to change, which can be seen in the latest Parliament and Council proposals. The article continues that potential threats to IP rights are another concern for businesses. Finally, EU Member States are afraid of losing their sovereignty in national security matters.
A coalition of human rights organisations published proposed amendments to the AI Act regarding border and migration control. They developed an extensive report examining the current development and deployment of AI systems by EU institutions and member states for asylum, border and migration control purposes. The report argues that some of the major failures of the AI Act with regard to migration and border control are that 1) it makes no reference to international obligations regarding migration and international protection, 2) it does not adequately consider the use of AI for individual risk assessments or profiling, and 3) it does not include predictive analytics systems for migration, asylum and border control management as high-risk.