The EU AI Act Newsletter #85: Concerns Over Chatbots and Relationships
EU regulation currently lacks clarity on the extent to which AI chatbots are allowed to encourage engagement through intimacy.
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter by the Future of Life Institute providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Poland has yet to designate a market surveillance authority: Poland’s delay in appointing a market‑surveillance authority under the AI Act could trigger infringement proceedings, warns legal expert Maria Dymitruk. The Act, which came into force on 1 August 2024, sets out staggered implementation dates, and all Member States were required to notify Brussels of their designated watchdogs by 2 August 2025, according to the Polish state news agency PAP. Despite the absence of an appointed body, Dymitruk stresses that Polish firms must already comply with the regulation wherever its obligations apply, and that breaches are punishable. She cautions that the gap creates uncertainty, as some market participants might mistakenly believe the rules are not yet binding. Deputy Prime Minister and Digital Affairs Minister Krzysztof Gawkowski downplays the risk of EU action, arguing that Poland is helping shape the implementation timetable and noting that many other Member States are still finalising their arrangements. He added that the government “will do everything to implement the Act quickly and well.” Under the draft Polish Artificial Intelligence Systems Act, the Commission for the Development and Safety of Artificial Intelligence (KRiBSI) would serve as the market‑surveillance authority.
Analyses
Possible regulatory gap in the AI Act: Maximilian Henning from Euractiv reported that EU regulation does not clearly define the extent to which AI chatbots may foster intimate user attachments. Sam Altman, OpenAI’s chief executive, estimates that fewer than one percent of ChatGPT’s hundreds of millions of users develop an “unhealthy relationship” with the service, but even that share could amount to millions of individuals. While the AI Act bans “purposefully manipulative or deceptive techniques” when they are likely to cause “significant harm”, developers could argue that occasional emotional bonds do not meet this threshold, and no enforcement actions have yet materialised. Other EU laws could fill the gap. The Unfair Commercial Practices Directive (UCPD) bans practices that distort consumer decision‑making, and the Digital Services Act (DSA) prohibits interfaces that deceive or manipulate users, both of which could apply to these chatbots. Yet, as BEUC’s Urs Buscke notes, these rules target interface design rather than conversational content, creating interpretive uncertainty. Future legislation, such as the proposed Digital Fairness Act, aims to curb “dark patterns” and addictive design, but experts warn that policymakers still lack a clear grasp of the risks posed by emotionally manipulative AI, underscoring the need for dedicated regulation.
Concerns over chatbots and relationships: Pieter Haeck from POLITICO wrote that AI‑driven companions, which are always on hand and non‑judgemental, are raising alarms among experts and EU regulators. While they may reshape notions of friendship, their rise mirrors earlier digital disruptions – from social‑media platforms to dating apps – where regulation lagged behind adoption, leaving vulnerable users exposed. Incidents linking AI companions to suicides and assassination plots have intensified calls for oversight. Under the AI Act, chatbots must disclose that they are artificial, but beyond that the obligations for AI companions remain vague. The Act’s risk‑based framework classifies certain practices as “unacceptable” (e.g. subliminal manipulation) and earmarks high‑risk status for systems affecting health, safety or fundamental rights from August 2026 onward. Lawmakers, led by Dutch Green MEP Kim van Sparrentak, are pushing to explicitly classify AI companions as high‑risk, which would trigger fundamental rights assessments. Critics argue that the current regime focuses on functional harms and overlooks emotional ones, making effective regulation of AI‑mediated relationships inherently challenging.
Inside Europe’s AI strategy with EU AI Office Director Lucilla Sioli: Laura Caroli, Senior Fellow at the Wadhwani AI Center at CSIS, interviewed Lucilla Sioli. Dr Sioli explains that the AI Act paved the way for an AI Office within the European Commission’s DG Connect, created about a year ago to steer research, innovation and the AI supply chain while overseeing the Act’s implementation. The Office pursues two intertwined goals: fostering AI‑driven growth for Europe’s economy and society, and building trust through the “trustworthy AI” rules that form part of the broader innovation policy. It coordinates several units that shape policy, fund research and develop the AI Continental Action Plan, and it engages internationally, with particular attention to technology transfer to the Global South. Regarding the Act’s rollout, high‑risk and transparency obligations are planned for 2026, contingent on standards being finalised by CEN and CENELEC. The Commission is currently assessing whether to postpone these dates. When asked about US criticism that Europe over‑regulates, Sioli stresses that the risk‑based approach targets only the roughly ten percent of AI applications that pose significant societal risk, leaves research and development untouched, and replaces a potential patchwork of 27 national regimes with a single set of rules.
Jobs
Administrative job at the AI Office: The AI Safety Unit of the EU AI Office is recruiting an Assistant to the Head of Unit, with applications due by 15 September 2025. Unit A3 is central to applying and enforcing the rules on general‑purpose AI models: it devises testing protocols, evaluates model capabilities and risks, liaises with major AI providers on compliance, and represents the EU in international AI‑safety programmes. The unit also leads on the design of model cards, monitors adherence to the code of practice, prepares Commission decisions requesting documentation, and investigates possible breaches, while working with the scientific panel and responding to alerts on systemic risks from frontier AI. The post seeks a well‑organised professional with strong administrative skills and attention to detail. Responsibilities include managing calendars, events and contacts, as well as the Head of Unit's correspondence; ensuring information flow; coordinating deadlines; handling the filing of documents related to enforcement; and supporting personnel onboarding, budgeting and meeting logistics. Candidates must be fluent in English, proficient with Outlook, Word, Excel, SharePoint and Teams, and familiar with EU procedures.