Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
EURACTIV reported that a new partial compromise on the AI Act by the Czech presidency suggests that an AI system would qualify as high-risk only if it has a major impact on decision-making. The author explains that the key idea behind this suggestion is to create more legal certainty and to keep AI applications that do not play a major role in the final decision from falling within the scope of the Act. In addition, the provision deeming AI systems high-risk if they take decisions without human oversight has been removed. Furthermore, whenever the list of high-risk applications is updated, the potential benefits of the AI for individuals or society at large will need to be considered, not just the potential for harm. Finally, according to the article, this document will be the basis for a technical discussion at the Telecom Working Party meeting on 22 September.
The Committee on Legal Affairs (JURI) at the European Parliament adopted its opinion on the AI Act. The committee introduces several exemptions, such as for research, testing and development; for business-to-business applications; and for open-source AI until its commercialisation. In addition, the committee specified under what circumstances responsibilities in the value chain might shift to another actor (Article 23a) and integrated general-purpose AI systems into the AI Act. Finally, it recommends that the AI Board be a powerful EU body with its own legal personality and strong involvement of stakeholders.
According to EURACTIV, the rapporteurs Brando Benifei and Dragoș Tudorache have started to address regulatory sandboxes in the latest compromise text. The article begins by explaining that the European Parliament has so far tried to avoid the most sensitive articles, and that the most controversial amendments on sandboxes will therefore also be postponed to later discussions. That said, the compromise text states that each member state must establish at least one AI regulatory sandbox, which should be operational by the time the regulation enters into application. Other topics discussed include ensuring sufficient resources for the sandboxes, the possibility of setting up sandboxes at the regional or local level or jointly with other countries, and a list of objectives for the sandboxes.
Analyses
VentureBeat summarised a panel discussion by the Center for Data Innovation about the inclusion of general-purpose AI (GPAI) systems in the AI Act. The first thing the event addressed was the reasons to include GPAI in the Act. There are several reasons why the European Parliament is seeking to address and define GPAI systems. Firstly, there is a fear that all these technologies fall under the umbrella term of AI yet could look very different in 5-10 years. Secondly, GPAI systems could be dominated by big tech companies, which has competition implications. Thirdly, these systems are not only technologically complex but also involve several market players in a complicated value chain. Another big topic discussed at the event was how to define these systems. One of the key characteristics mentioned was that these systems can perform and learn certain tasks, including those for which they weren’t originally intended, designed, or trained. In addition, they have broader scale, size, parameters and datasets, and are able to take on a wider set of tasks and activities.
The Centre for European Policy Studies published a report mapping the gaps and limitations of the AI Act in relation to 14 other laws. The report makes the following recommendations in relation to eight areas: 1) clarify and align the terminology with existing EU legislation; 2) better fine-tune the Act's interactions with sector-specific rules; 3) increase consistency with EU data protection rules; 4) address a number of loopholes to improve legal certainty for AI providers and users; 5) provide more detailed provisions to allow for meaningful integration with existing product safety rules; 6) strengthen the enforcement scheme by aligning it with other digital policies; 7) tackle the growing divergence between the stated goals of the Act and emerging data transfer rules; and 8) offer exemptions aimed at promoting scientific research.
VentureBeat wrote another article on the debate around the possible regulation of open-source general-purpose AI (GPAI) systems. One viewpoint highlighted is that regulation “would create legal liability for open-source GPAI models, undermining their development”, and would thereby “further concentrate power over the future of AI in large technology companies” and prevent critical research. The opposing viewpoint, however, is that regulation is needed to direct innovation away from exploitative, harmful, and unsustainable practices. Proponents of this view argue that the only people in a position to thoroughly document training data are those who collect it. In addition, creating collections of data, and models trained on them, could be dangerous enough that open-source developers should not have free rein.
TechCrunch covered the debate over whether the proposed EU rules could limit the type of research that produces cutting-edge AI tools like GPT-3. According to the piece, some experts worry that the AI Act would impose onerous requirements on open efforts to develop AI systems. The article continues that even with carve-outs for some categories of open-source AI, such as those used exclusively for research and with controls to prevent misuse, it can be very difficult to stop these projects from making their way into commercial systems, where they could be abused by malicious actors. A recent example is Stable Diffusion, an open-source AI system that generates images from text prompts: it was released with a license prohibiting certain types of content, but was still used by some to create pornographic deepfakes of celebrities. Other experts, however, think that the fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein. According to this view, regulating open-source systems more heavily might demonstrate global leadership and encourage others to follow suit.