Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
According to EURACTIV, the European Parliament’s rapporteurs’ latest compromise amendments focused on redesigning the AI Act's enforcement structure. EURACTIV reports that the text gives national supervisory authorities the power to conduct unannounced on-site and remote inspections of high-risk AI systems, and to request evidence needed to identify non-compliance with the regulation. The scope of the EU high-risk database is expanded from standalone AI systems to all high-risk systems, including those integrated into more complex systems. For post-market monitoring of high-risk AI systems, providers will have to continuously analyse the environment in which their systems operate. Individuals and groups affected by an AI system will have the right to lodge a complaint with the relevant national authority and to be informed of the complaint's progress.
The Czech Presidency has now prepared the final version of the compromise text, with a view to submitting it to the upcoming TTE (Telecommunications) Council on 6 December for a possible general approach. The main elements of the compromise proposal include the definition of an AI system, prohibited AI practices, the list of high-risk AI use cases in Annex III, the classification of AI systems as high risk, general-purpose AI systems, the scope of the AI Act, transparency and innovation.
Analyses
EURACTIV summarised an event where the AI Act was discussed from the perspective of SMEs. One of the main points made was that the Commission's impact assessment has underestimated the potential compliance costs. It was also argued that with this legislation the EU risks handing the market to the most prominent players. This was countered by the argument that while the administrative burden weighs differently depending on the size of the AI provider, the risks are the same. Another expert saw no alternative to conformity tests or certifications before products enter the market, since otherwise nobody could trust these products. Finally, some ideas were presented for optimising compliance costs, such as distinguishing between SMEs that use AI in their end products as opposed to their internal products, and using technical standards for compliance.
Science Business published an article about the possibility of the EU and the US creating a common space for trustworthy AI, with a single set of rules followed by companies around the world. The author notes, however, that Brussels and Washington have taken very different approaches so far: the EU is formulating binding legislation in the AI Act, whereas the US is working on voluntary guidelines from NIST and the non-binding principles of the AI Bill of Rights. EU Commissioner Margrethe Vestager thinks it is possible to align the EU AI Act and the US AI Bill of Rights; other stakeholders are sceptical, or at least expect alignment to take several years.
Intellera Consulting published a report reviewing the European Commission's assumptions for calculating AI Act compliance costs, arguing that the original calculation does not provide a specific approach to estimating costs for SMEs. Using a case study of an SME with total revenues of 23.2 million euros, 15% of which is spent on AI R&D, and 150 employees, 50 of whom are developers, the report estimates AI Act compliance costs under three scenarios with varying assumptions. Scenario 1 yields total costs of about 4 million euros, or 17.3% of total revenues, which would be a barrier to market entry. Scenario 2 yields total costs of about 611,000 euros, or 2.7% of total revenues, which would be more feasible for SMEs. Scenario 3 yields total costs of 301,200 euros, or 1.3% of total revenues, which would increase the likelihood of SMEs adopting AI technology.
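To make the scenario percentages concrete, here is a minimal sketch that recomputes the cost-to-revenue shares from the rounded figures cited above. This is only an arithmetic sanity check, not Intellera's methodology; because the cost figures are approximate ("about"), the computed shares can differ slightly from the report's rounded percentages.

```python
# Sanity check of the cost-to-revenue shares cited above.
# Inputs are the rounded figures from the report summary, not
# Intellera's underlying model; minor rounding drift is expected.

TOTAL_REVENUE_EUR = 23_200_000  # SME case study: 23.2 million euros

scenario_costs_eur = {
    "Scenario 1": 4_000_000,  # "about 4 million euros"
    "Scenario 2": 611_000,    # "about 611,000 euros"
    "Scenario 3": 301_200,
}

for name, cost in scenario_costs_eur.items():
    share = cost / TOTAL_REVENUE_EUR
    print(f"{name}: {cost:>9,} EUR -> {share:.1%} of revenue")
```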
Access Now published a report on human rights impact assessments for AI, including their relationship with the AI Act. Human rights impact assessments are generally used to evaluate the potential or actual impact of a strategy, practice or product on human rights. The authors cite the principle of non-discrimination as an example of a human right that could be operationalised in the development and deployment of AI systems, for instance by identifying who may be affected by a business activity and what adverse human rights impacts they may face. The report suggests that, to strengthen accountability for the protection of human rights, the AI Act could revise its risk management requirement to mandate the assessment of human rights risks.
The European Center for Not-for-Profit Law published an opinion on the dangers of excluding AI used for military and national security from the AI Act and other binding European instruments. The author emphasises that while this exclusion is partly due to national security being outside the EU's competences, the proposed exemptions in the AI Act go too far. The author argues that whenever an EU member state exercises its exclusive competence in relation to national security to impose obligations on entities that are subject to EU law, those obligations must be compatible with the relevant EU law. Therefore, they suggest that the AI Act should be drafted in a way that prevents circumvention of EU law in the context of military and national security.