The EU AI Act Newsletter #21
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the proposed EU artificial intelligence law.
According to EURACTIV's Luca Bertuzzi, the European Parliament’s co-rapporteurs, Brando Benifei and Dragoș Tudorache, circulated new compromise amendments on 9 January focusing on fundamental rights impact assessments and obligations for users of high-risk systems. One of the main proposals is a requirement for all users of high-risk AI systems to carry out a fundamental rights impact assessment. It would cover elements such as the intended purpose and geographic scope of use, who would be affected, specific risks to marginalised groups, and foreseeable environmental impact. Additionally, users would be required to maintain regularly updated robustness and cybersecurity measures, and to ensure human oversight in all instances required by the AI regulation. A paragraph has also been added to address generative AI, such as ChatGPT, requiring users to disclose when text has been generated by AI.
EURACTIV also summarised Germany's position on the AI Act, including on biometric recognition, predictive policing, emotion recognition, law enforcement, AI in the workplace, and high-risk classification. Bertuzzi writes that Germany favours a total ban on real-time biometric identification in public spaces, as previously set out in the coalition agreement signed by the three governing parties in 2021, while allowing ex-post identification. Furthermore, Germany advocates prohibiting any AI application that substitutes for human judges in legal assessments of an individual's risk of committing or repeating a crime.
Ben Wodecki of IoT World Today reported that the AI Act may be voted on in the European Parliament by March, according to Laura Caroli, a parliamentary assistant currently leading negotiations on the Act at the technical level. Caroli predicts that the regulation will be approved by the end of 2023 and come into force two years later. The AI Act has already been approved by the Council of the European Union, but to become law it must pass through both institutions. The bill has been delayed in the Parliament by MEPs arguing over provisions on biometric identification systems.
Melissa Heikkilä wrote in MIT Technology Review about European lawmakers working on rules for generative image- and text-producing AI models. Heikkilä explains that the EU calls these generative models “general-purpose AI” systems because they can be used for many different things. These models increasingly form the foundation of many AI applications, yet the companies that make them are very secretive about how they are built and trained. It is difficult to pinpoint how exactly the models generate harmful content or biased outcomes, or how to mitigate those problems. Heikkilä states that the exact way in which these models will be regulated in the AI Act is still under debate, but that in any case creators of general-purpose AI models will likely need to be more open about how their models are built and trained.
MIT Technology Review's Melissa Heikkilä also wrote about four big trends she expects to shape the AI landscape in 2023, alongside her colleague Will Douglas Heaven. Heikkilä lists new laws and regulations around the world aimed at AI use and development as one of the big trends. She states that the final version of the EU’s AI Act may be finished by the summer of 2023 and will likely include bans on AI practices deemed detrimental to human rights, such as systems that score and rank people based on trustworthiness. She continues that the use of facial recognition in public areas by law enforcement will be restricted, and there is a movement to completely ban it for both law enforcement and private companies. However, this move will likely face opposition from countries that wish to use the technology to combat crime.
Science Business published a summary of the views of European AI startups on the AI Act. Writer Ian Mundell notes, firstly, that the Act is expected to have a greater impact on startups than initially anticipated, as it could impose additional responsibilities and costs on a wide range of companies, which may make them less attractive to investors. A minority of startups see the regulation as having no effect or even being positive for investments due to providing a basic guideline for developing responsible AI. Secondly, proposals to regulate general-purpose AI are seen by some to defy the logic of the risk-based approach of the AI Act, and uncertainties remain about how open source solutions will be affected by these rules. In addition, Mundell raises concerns over the Council's proposal that details of compliance assessment for general-purpose AI should be worked out by the European Commission after the Act enters into force.
The IAPP summarised an event which explored how AI regulatory sandboxes are helping companies develop their machine-learning models. Policymakers are still debating the usefulness of regulatory sandboxes within the AI Act. Some believe they are vital for entities to test their machine-learning algorithms; others argue that companies may be reluctant to allow national data protection authorities access to their systems unless they are granted leniency from potential EU General Data Protection Regulation violations while their algorithms are tested in sandboxes.
Thanks for reading The EU AI Act Newsletter! Subscribe for free to receive new posts and support my work.