Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
EURACTIV's latest overview helps to navigate the thousands of amendments proposed in the European Parliament, while the public is still waiting for the amendments to be made available. According to EURACTIV, the definition of AI remains one of the main areas of debate, with MEP Benifei proposing a broad definition while the European People’s Party (EPP) insists on the OECD definition. MEP Tudorache introduced a new article to capture AI applications in metaverse virtual environments. He also joined the social democrats and greens in supporting a ban on biometric recognition. Right- and left-leaning MEPs disagree on fines: the former propose lower fines, an exemption for SMEs and additional considerations in the calculation, whereas the latter propose increasing fines and removing business size and market share from the criteria.
To our knowledge, the only recent amendments in the European Parliament that have been made public are those proposed by the EPP, led by MEPs Axel Voss, Deirdre Clune and Eva Maydell. Proposals worth highlighting include guidance from national benchmarking authorities on measuring accuracy and robustness, and the suggestion that original providers of general purpose AI systems should abide by tailored obligations even before such systems are placed on the market or put into service. Lastly, European standardisation organisations are recommended to pursue objectives such as promoting investment and innovation in AI and enhancing the representation of all relevant European stakeholders.
EURACTIV summarised a draft standardisation request related to the implementation of the AI Act. The European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC) and the European Telecommunications Standards Institute (ETSI) will develop the technical standards for the AI Act. The organisations will propose a work programme, including a timeline and responsibilities, and submit periodic reports. According to the summary, technical standards will be pivotal to the implementation of the AI Act, as they will signal the conformity of AI systems with EU regulation and reduce compliance costs. The three standardisation organisations would have to submit their joint final report by 31 October 2024.
Analyses
The Brookings Institution produced a report on the Brussels effect of the AI Act, i.e. whether the regulation will have a broad global impact. According to the report, the AI Act will have a global impact in three main ways. First, for AI systems embedded in regulated products, companies will have to adapt their conformity assessment procedures to pay special attention to AI systems. Second, high-risk AI systems will be highly affected if they are built into online or otherwise internationally interconnected platforms; Brookings cites LinkedIn, an entirely interconnected platform with no geographic barriers, as a useful example. Third, for AI systems interacting with humans, the effect may be widespread, as the transparency requirements are likely to be considered a trivial change by companies abroad.
The International Technology Law Association published a green paper on the AI Act, the result of a collaboration between law firms across international jurisdictions. The paper argues that the AI Act is not a holistic piece of legislation: it relies on pre-existing laws such as the GDPR and will need other new laws to function, such as a measure enabling effective AI liability mechanisms. The paper also argues that the EU product-safety regime is unfavourable to a subjective-harm or outcomes-based approach, because the EU approach will be limited to specific use cases and classes of AI systems. The authors claim that the EU could instead mandate a set of core ethical principles to underpin the operation of all EU-wide AI systems.
Trilateral Research published an overview of the relationship between the AI Act and the GDPR, focusing on human oversight. The GDPR forbids decisions based solely on automated processing, with a few exceptions. The AI Act, however, simply sets a general obligation for high-risk AI systems to be designed and deployed in a way that can be effectively overseen by natural persons. According to Trilateral Research, the AI Act thereby fails to identify and regulate mechanisms for effectively implementing human oversight, because it does not specify when and where humans shall have the final word on a decision. High-risk AI systems are still likely to fall under the GDPR because they use personal data, thereby requiring their users and providers to take additional measures for human oversight.
The Brookings Institution published a policy brief on the AI Act focusing on its next steps and its implications for the governance of AI at the global level. The policy brief points out that the foundational questions of the AI Act influencing international cooperation on AI regulation are the definition of AI and the scope of the risk-based approach; how the AI Act is enforced is also considered critical. Many are offering definitions of AI, but the one chosen by the EU is likely to become a reference for other AI regulations around the world. Many countries endorse the risk-based approach; however, different approaches to risk assessment and management can impose costs on AI development and use. On the enforcement side, the mechanisms for risk assessment and conformity assessment in the AI Act will likely lead the way to a global approach to assessing conformity, while inconsistencies across sectors and EU member states are likely to hinder mutual agreements.
EURACTIV published an op-ed on the risks that would arise without adequate regulation of general purpose AI systems in the AI Act. The authors (one of whom is me) argue that the allocation of responsibilities between different actors set forth by the AI Act is pivotal for it to be future-proof. The ability of these systems to perform a wide range of functions is likely to make them the future of AI. At the same time, assigning all of the provider responsibilities to users, which are European SMEs, is not effective. By contrast, developers of general purpose AI systems (mostly large companies based outside the EU) have the economic and other resources to make such systems abide by the requirements of the AI Act. Additionally, because of their role in the value chain, assigning most of the responsibility to developers would ensure that general purpose AI systems are compliant with the AI Act across use cases.