The EU AI Act Newsletter #69: Big Tech and AI Standards
Corporate Europe Observatory has published an overview arguing that European AI standard-setting bodies are heavily dominated by tech industry representatives.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Analysis
Feedback on the second Code of Practice draft: Academics at the Leverhulme Centre for the Future of Intelligence reviewed the second draft of the Code of Practice for General-Purpose AI and acknowledged significant improvements over the first version. The authors praise the balanced approach between Commitments, Measures and KPIs in establishing governance for general-purpose AI models. Their recommendations for the next draft focus on four areas: increased attention to inference-time considerations, a more adaptable tiered system of obligations, stronger external assessment requirements, and refinement of the framework for capabilities, propensities and context. The document also suggests methodological improvements to risk assessment, including changes to the risk taxonomy and more thorough evaluations of risk sources, complemented by capability-based evaluations with concrete outcome metrics and vulnerability assessments. Specific recommendations include resolving terminological inconsistencies around systemic risk and high-end capabilities, implementing a flowchart-based system for risk assessment, and clarifying obligations for model modifications. Additional suggestions address notification requirements for training runs and the relationship between capability, propensity and context in risk assessment.
Microsoft innovating in line with the AI Act: Natasha Crampton, Microsoft's Chief Responsible AI Officer, outlined the company's commitment to helping customers both innovate with AI and comply with the AI Act. Microsoft is adapting its products and services to meet regulatory requirements and engaging with European policymakers to support effective implementation practices. The company emphasises its contractual commitment to regulatory compliance across all jurisdictions where its customers operate. It employs various risk management practices, including impact assessments and red-teaming, particularly for high-risk models through its Sensitive Uses programme. Its risk mitigation involves systematic measurement and tools such as Azure AI Content Safety. The company's Responsible AI Standard was developed with early AI Act drafts in mind, and its cross-functional teams are working to align internal standards with the final legislation. Ahead of the February entry into application of the prohibited practices provisions, Microsoft is taking a three-part approach: reviewing its existing systems, creating a new restricted uses policy, and updating its contracts. The company acknowledges its role in supporting downstream customers' compliance with the high-risk AI system rules.
How Big Tech sets its own AI standards: Corporate Europe Observatory has published an overview arguing that European AI standard-setting bodies are heavily dominated by tech industry representatives. The investigation shows how major corporations such as Oracle, Microsoft, Amazon, Huawei, IBM and Google are actively involved in developing the harmonised standards required by the EU AI Act. The research, based on expert interviews, social media analysis and public documents, demonstrates how private standard-setting organisations are writing, behind closed doors, rules that will carry legal status in the EU. The article claims that these industry-led forums are effectively defining what constitutes 'trustworthy' AI. While tech companies present standard-setting as a purely technical exercise, the investigation argues it is highly political, involving crucial decisions about bias, fairness and fundamental rights. The report highlights that independent experts and civil society organisations are significantly outnumbered and under-resourced compared to corporate representatives, raising concerns that industry-friendly standards could weaken the Act's protections.
AI literacy in the AI Act: Credo AI participated in the EU AI Office's first AI literacy workshop, held in December 2024 under the AI Pact. They emphasised the need for tailored literacy programmes that assess current levels of AI knowledge, combine basic AI concepts with sector-specific applications, and provide continuous learning opportunities. The Act's AI literacy requirement becomes effective on 2 February. By this date, organisations providing or deploying AI systems must implement adequate literacy measures for their staff, for example through training or reporting mechanisms. The Act frames AI literacy as the skills, knowledge and understanding that enable stakeholders to make informed decisions, encompassing both technical comprehension and awareness of AI's broader societal impact. For organisations, AI literacy underpins regulatory compliance and alignment with ethical standards and expectations. For individuals, it serves as an empowerment tool, enabling them to protect their rights, exercise democratic control and make informed decisions when using AI systems, while fostering both trust in and critical awareness of AI capabilities.
Civil society statement as prohibitions deadline looms: Civil society organisations have issued a joint statement criticising the European Commission's approach to drafting guidelines on the AI Act's prohibitions, emphasising the need to prioritise human rights and justice in the implementation process. Prohibitions on "unacceptable risk" AI systems take effect from 2 February, but the Commission has not yet published interpretive guidelines. Stakeholders expressed dissatisfaction with a December consultation, noting both its lateness and the absence of a draft document, despite one reportedly existing internally at the Commission. The organisations call for the guidelines to cover additional problematic practices, addressing perceived gaps in the AI Act. They also advocate for "simple" systems to be classified as AI, expressing concern that developers might otherwise circumvent obligations by simplifying systems whilst maintaining their functionality. The statement was endorsed by 21 civil society organisations, including Access Now, Amnesty International and European Digital Rights (EDRi), as well as four professors.
Zuckerberg urges Trump to stop the EU from fining US tech companies: Meta CEO Mark Zuckerberg has called on the incoming Trump administration to protect US tech companies from EU fines for regulatory violations. Speaking on the Joe Rogan Experience podcast, he argued that America's dominant tech industry provides strategic advantages that should be defended. Zuckerberg highlighted that EU authorities have imposed more than $30 billion in penalties on US tech companies over the past 20 years, including a recent €797 million fine on Meta for antitrust violations related to its classified ads service. He likened the EU's application of competition rules to a tariff on American companies and criticised the outgoing Biden administration for failing to address the situation effectively. According to Zuckerberg, whereas the US government would typically push back against countries that interfered with important American industries, in this case its inaction had left the EU free to act against American companies without restraint.