Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
POLITICO's Morning Tech reported (unfortunately, behind a paywall) that these final weeks are expected to be crucial for the AI Act, as discussions on the most difficult topics are now taking place, including definitions and high-risk AI practices. In the amendments circulated, risk is defined as “the combination of the probability of an occurrence of a hazard causing harm and the degree of severity of that harm”, while the definition of AI has been "parked". Elsewhere, a new section has been added to the list of high-risk AI systems to cover AI systems that generate complex text or audio and video content, as well as those that deploy subliminal techniques for scientific research or therapeutic purposes.
According to EURACTIV's Luca Bertuzzi, the European Parliament's co-rapporteurs are working to finalise the negotiations on the AI Act in the coming days. The pending issues they are trying to resolve include the list of AI uses that pose significant risks, the prohibited practices, and the definitions of key concepts used in the draft law. In a compromise version of Annex III, the co-rapporteurs have added a new high-risk category for AI systems intended to be used by vulnerable groups, specifically systems that could be used by children in a way that may seriously affect their personal development. In the amendments on prohibited practices, the rapporteurs propose expanding the ban on social scoring to cover groups as well as individuals, while the ban on AI-powered predictive policing models remains in place.
Analyses
Sabrina Küspert, Nicolas Moës and Connor Dunlop published an article on the Ada Lovelace Institute's blog about the challenge regulators face in adequately addressing general-purpose AI (GPAI) in the EU AI Act, focusing on the complexity of the value chain. The authors state that GPAI models have a wide range of applications and are sometimes called foundation models, as they serve as the building blocks for hundreds of single-purpose AI systems. They explain that GPAI models are usually made available to downstream developers via API or open-source access. With API access, the provider runs the model remotely on its own servers, while open-source release makes the model, or some of its elements, publicly available for modification and distribution. The authors also argue that the choice between offering an API and releasing a GPAI model as open source has different implications for accountability: the API option may offer more safeguards against harmful content but centralises control in the hands of the most powerful actors, while open source allows for more equitable access but could lead to the spread of biased outputs.
EURACTIV's Luca Bertuzzi also wrote an op-ed on the AI Act's requirements relating to critical infrastructure. Bertuzzi describes the dynamics of the AI Act's development: the original proposal classified as high-risk those AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity; the Council tried to distinguish the safety component from the management system itself; and the Parliament has proposed to differentiate the management of traffic from that of supply networks. Finally, Bertuzzi discusses the benefits and risks of a broad scope for classifying critical infrastructure systems as high-risk. Some are concerned that it could preclude useful tools that contribute to efficiency and security, while others see serious risks, such as large electricity blackouts.
Foo Yun Chee and Supantha Mukherjee reported in Reuters that EU industry chief Thierry Breton has stated that the AI Act rules will aim to tackle concerns around ChatGPT and related AI systems, including plagiarism, fraud, and the spread of misinformation. The authors note that, under the EU's draft rules, ChatGPT is classified as a general-purpose AI system that can be used for many purposes, including high-risk ones such as the selection of job candidates and credit scoring. They also mention that Breton wants OpenAI to cooperate closely with downstream developers of high-risk AI systems to help them comply with the AI Act. Finally, Breton stated that the European Commission is working closely with the Council and Parliament to further clarify the rules for general-purpose AI systems in the AI Act.
Pablo Jiménez Arandia wrote on Algorithm Watch about Spain's announcement of the first national agency to supervise AI under the AI Act and, more broadly, to oversee algorithms in both the private and public sectors. The AI Act is expected to require member states to designate national authorities to monitor compliance with the regulation, and Spain is the first to attempt to establish such an institution. The agency's ability to veto and sanction the use of potentially harmful systems will be closely linked to the final version of the AI Act. Finally, the author highlights that civil society has so far been excluded from participation in the design and strategy of the new agency.
The Publications Office of the European Union published a report on the AI standardisation landscape. As a reminder, once the final legal text of the AI Act comes into force, standards will play a fundamental role in supporting providers of the AI systems concerned, adding the necessary level of technical detail to the essential requirements prescribed in law. The report concludes that there are many concrete elements in existing IEEE standards that European standards developers might consider, including aspects related to AI bias, human oversight, record keeping, and risk management.
David Matthews at Science Business summarised how the risk management framework recently published by NIST in the US relates to the EU AI Act. Matthews notes that the US has no plans for binding legislation; its risk management framework is voluntary for companies to follow. Despite this, there is hope that companies will adopt the framework to limit their liability when sued over AI errors. One advantage over the AI Act is that the risk management framework can be implemented immediately, whereas the AI Act may still be a long way from completion. Another opportunity for collaboration is that the AI Act contains risk management requirements that the NIST framework's recommendations could help fulfil.