Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
Legislative Process
The IMCO-LIBE joint committee at the European Parliament held a hearing where they addressed the main issues concerning the AI Act. Experts were invited to give short presentations and answer questions in relation to the scope and overall purpose of the AI Act as well as the risk assessment and handling of high-risk AI systems. The hearing can be re-watched online here. The Future of Life Institute made a short video of the hearing:
Science|Business covered the experts' discussion of general purpose AI systems at the Parliament hearing. Max Tegmark, physics and AI specialist, claimed that the AI Act in its current draft is not sufficiently future-proof because it excludes general purpose AI systems. Stuart Russell, professor of computer science, stated that it makes sense to assess the accuracy, fairness, etc., of general purpose AI systems at the source, meaning providers should be required to carry out conformity assessments rather than the smaller European integrators downstream.
Analyses
University of Oxford researchers developed capAI, a conformity assessment procedure designed to help AI systems comply with the proposed AI Act. The purpose of this tool is to ensure and demonstrate that an AI system is trustworthy and conforms to the regulatory requirements. The procedure has three components: an internal review protocol for quality assurance and risk management; a summary datasheet to be submitted to the EU’s future public database on high-risk AI systems in operation; and an external scorecard, which can be made available to customers and other stakeholders of the AI system.
ALLAI, an organisation focused on responsible AI, recently published two policy papers: one on the objective, scope and definitions in the AI Act, the other on prohibited AI practices. In the first paper, they assess whether Chapter I reflects the overall objective of the Act, which is to protect health, safety and fundamental rights while supporting innovation. In the second paper, they evaluate each prohibited practice, the scope of these prohibitions, and how the prohibitions relate to other legislation.
The View published an op-ed on how Hong Kong and mainland China could emulate the AI Act to build trust and confidence among users of AI systems. It states that the EU is looking to set a gold standard for trustworthy AI, and that Hong Kong needs to move beyond high-level AI ethics and governance principles.
The European DIGITAL SME Alliance developed a factsheet outlining key concerns voiced by SME experts about the AI Act, along with proposals for improvement. SMEs are concerned about over-regulation stifling innovation, an overly broad AI definition, real risks being inadequately addressed, SMEs being underrepresented in standardisation institutions, and more. They recommend exempting SMEs in some industries from regulation and changing the AI definition to exclude optimisation methods while still capturing future, more powerful AI.
The European Center for Not-for-Profit Law has developed and submitted several amendments to the AI Act. Their proposals aim to strengthen the legal framework of the Act, remove some of its inconsistencies, ensure meaningful stakeholder engagement in AI governance, and support a flexible, rights- and risk-based approach.
Tech.eu argued that the AI Act lacks provisions to protect the democratic process from AI-driven manipulation. According to their op-ed, AI manipulation of voters is already happening and affects access to unbiased information. Both the original draft and the latest compromise texts from the Slovenian and French presidencies fail to classify uses of AI that jeopardise democratic processes as an unacceptable risk.