{"id":108071,"date":"2025-03-26T13:51:29","date_gmt":"2025-03-26T17:51:29","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&#038;p=108071"},"modified":"2025-03-27T03:34:16","modified_gmt":"2025-03-27T07:34:16","slug":"cdt-europes-ai-bulletin-march-2025","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/cdt-europes-ai-bulletin-march-2025\/","title":{"rendered":"CDT Europe\u2019s AI Bulletin: March 2025"},"content":{"rendered":"\n<p><em>Policymakers in Europe are hard at work on all things artificial intelligence, and CDT Europe is here with our monthly Artificial Intelligence Bulletin to keep you updated. We cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. To receive the AI Bulletin, <a href=\"https:\/\/cdt.org\/email-signup\/\"><strong>you can sign up here<\/strong><\/a><\/em>.<\/p>\n\n\n\n<p><strong>Third GPAI Code of Practice Draft Excludes Discrimination&nbsp;<\/strong><\/p>\n\n\n\n<p>The third version of the General-Purpose AI (GPAI) Code of Practice \u2013 and the last to be put to multistakeholder consultation \u2013 <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/third-draft-general-purpose-ai-code-practice-published-written-independent-experts\">was published<\/a> on 11 March, alongside a <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/faqs\/general-purpose-ai-models-ai-act-questions-answers\">FAQ page<\/a> on the Code of Practice. The draft is now split into four parts dealing with commitments, transparency, copyright, and safety and security, respectively. 
The latter section addresses obligations related to risk assessment and mitigation, and has undergone significant changes that weaken the draft\u2019s fundamental rights protections.&nbsp;<\/p>\n\n\n\n<p>As we covered in our <a href=\"https:\/\/cdt.org\/insights\/cdt-europe-statement-on-the-third-general-purpose-ai-code-of-practice-draft\/\">initial reaction<\/a> to the draft, the list of risks that are mandatory to assess \u2014 also known as the \u201cselected\u201d systemic risks taxonomy \u2014 now excludes discrimination and is largely focussed on existential risks. Discrimination was moved to the list of risks that are optional to assess, joining other risks to fundamental rights such as privacy harms and increased spread of child sexual abuse material or non-consensual intimate imagery. The draft instructs GPAI model providers to assess these risks only when they are specific to models\u2019 high-impact capabilities.&nbsp;<\/p>\n\n\n\n<p>As we argued in our <a href=\"https:\/\/cdt.org\/insights\/third-draft-of-the-general-purpose-ai-code-of-practice-misses-the-mark-on-fundamental-rights\/\">fuller comments<\/a> on the draft, the explanations given \u2013 that fundamental rights risks don\u2019t arise from high-impact capabilities, and that the EU digital rulebook better accounts for these risks \u2013 do not stand up to scrutiny and fail to justify the changes. 
A wide range of organisations have <a href=\"https:\/\/www.adalovelaceinstitute.org\/news\/gpai-code-of-practice\/\">critically<\/a> <a href=\"https:\/\/huggingface.co\/blog\/frimelle\/eu-third-cop-draft\">reacted<\/a> to the changes in the systemic risk taxonomy, while also acknowledging some positives: the draft\u2019s provisions concerning external assessment were strengthened, and it now requires <a href=\"https:\/\/huggingface.co\/blog\/frimelle\/eu-third-cop-draft\">greater consideration of acceptability of risks<\/a> by model providers.<\/p>\n\n\n\n<p>This draft Code of Practice will undergo one final round of review before the final version is presented and published by 2 May. The AI Office and the AI Board will subsequently review the draft and publish their assessment, but the decision to go forward with the Code ultimately rests with the European Commission. The EC can choose either to approve the Code through an implementing act, or \u2013 if the Code is not finalised or is deemed inadequate \u2013 to provide common rules for how GPAI model providers should meet their obligations by 2 August, the same date those obligations become applicable. Independently of this process, the European Commission can request standardisation of the rules for GPAI models. Once those standards are finalised, covered providers of GPAI models will be presumed to comply with their obligations under the AI Act.&nbsp;<\/p>\n\n\n\n<p><strong>Spain Takes a Robust Approach to Prohibited AI Practices<\/strong><\/p>\n\n\n\n<p>The Spanish government approved a <a href=\"https:\/\/avance.digital.gob.es\/_layouts\/15\/HttpHandlerParticipacionPublicaAnexos.ashx?k=19128\">bill implementing the AI Act<\/a> at the national level, marking the first step towards its formal adoption. Notably, the bill sets out narrow conditions under which remote biometric identification (RBI) may be lawfully used for law enforcement purposes. 
The practice is in principle prohibited by the AI Act, but is permitted for three narrow law enforcement purposes \u2013 searching for missing persons and victims of specified crimes, preventing threats or terrorist attacks, and identifying suspects of specified criminal offences. It can only be lawfully carried out in a member state where it is explicitly authorised by implementing national legislation, which can be stricter \u2013 but not broader \u2013 than the terms set by the Act. The Spanish bill as written authorises RBI use for only one of the three purposes the AI Act lists, namely to locate and identify individuals suspected of committing criminal offences of a given degree of seriousness, as specified in Annex II of the Act.<\/p>\n\n\n\n<p>The bill builds on the AI Act by classifying infringements into three categories: minor, severe, and very severe. Any use of an AI practice the law prohibits, including RBI use outside of the draft law\u2019s sole exception, is deemed very severe. Failure to notify users when they directly interact with an AI system, or to <a href=\"https:\/\/www.reuters.com\/technology\/artificial-intelligence\/spain-impose-massive-fines-not-labelling-ai-generated-content-2025-03-11\/\">label AI-generated content<\/a> in line with the AI Act\u2019s requirements, will constitute a \u201csevere\u201d infringement.<\/p>\n\n\n\n<p><strong>Italian Draft Law Aspires to Set Limits on AI Uses in Critical Sectors&nbsp;<\/strong><\/p>\n\n\n\n<p>An <a href=\"https:\/\/www.senato.it\/service\/PDF\/PDFServer\/BGT\/01418921.pdf\">Italian government law decree<\/a> approved by the Senate sets general conditions for the use of AI, delineating and limiting its uses in critical sectors.&nbsp;<\/p>\n\n\n\n<p>The law specifies that minors under 14 years old may only access AI systems with parental consent. 
It identifies key areas that stand to benefit from AI \u2013 such as the healthcare sector and the working environment \u2013 and also emphasises key safeguards, such as creating an AI Observatory within the Ministry of Labor, and limiting uses of AI systems in the judicial sector to administrative purposes, specifically excluding legal research.&nbsp;<\/p>\n\n\n\n<p>Further, the law amends aspects of Italian criminal law to cover the use of AI in committing criminal offences. The law notably introduces a new criminal offence for the dissemination of AI-generated content \u2013 ostensibly including deepfakes \u2013 without a person\u2019s consent where it results in unjust damage, punishable by imprisonment of one to five years.<\/p>\n\n\n\n<p>The law is in draft form and will need to be approved by the Italian Chamber of Deputies.<\/p>\n\n\n\n<p><strong>AI Identified as a Key Priority in Europe\u2019s Defence Strategy&nbsp;<\/strong><\/p>\n\n\n\n<p>A <a href=\"https:\/\/defence-industry-space.ec.europa.eu\/document\/download\/30b50d2c-49aa-4250-9ca6-27a0347cf009_en?filename=White%20Paper.pdf\">joint white paper<\/a> on European defence, released last week by the European Commission and the High Representative for Foreign Affairs and Security Policy, identified AI as a priority defence capability area, noting that new ecosystems and value chains for cutting-edge technologies such as AI \u201ccan feed into civilian and military applications\u201d. The paper highlights AI-powered robots as a concrete area of opportunity.&nbsp;<\/p>\n\n\n\n<p>The white paper announces a strategic dialogue with the defence industry to identify regulatory hurdles and address challenges ahead of presenting a dedicated Defence Omnibus Simplification proposal by June 2025. 
This new simplification proposal adds to the five recently announced simplification initiatives \u2014 reviews of legislation from the <a href=\"https:\/\/cdt.org\/insights\/cdt-europes-ai-bulletin-february-2025\/\">digital<\/a>, agricultural, and other domains \u2014 outlined in the Commission\u2019s <a href=\"https:\/\/commission.europa.eu\/document\/download\/8556fc33-48a3-4a96-94e8-8ecacef1ea18_en?filename=250201_Simplification_Communication_en.pdf\">communication on simplification<\/a>.<\/p>\n\n\n\n<p>The paper further announces a forthcoming European Armament Technological Roadmap to be published this year \u2014 \u201cleveraging investment into dual use advanced technological capabilities at EU, national and private level\u201d \u2014 that will focus on AI and quantum in an initial phase.<\/p>\n\n\n\n<p><strong>In Other \u2018AI &amp; EU\u2019 News<\/strong>&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Digital rights NGO noyb <a href=\"https:\/\/noyb.eu\/en\/ai-hallucinations-chatgpt-created-fake-child-murderer\">filed a second complaint against OpenAI<\/a> after a Norwegian user queried ChatGPT for information related to his name, and the chatbot inaccurately responded that the individual by that name was a convicted murderer. The complaint, filed with the Norwegian data protection authority Datatilsynet, argues that OpenAI violates GDPR\u2019s data accuracy principle by allowing ChatGPT to create defamatory outputs about users.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A proposed amendment to the Hungarian Child Protection Act seeks to <a href=\"https:\/\/edition.cnn.com\/2025\/03\/17\/europe\/orban-anti-lgbtq-bill-budapest-pride-intl-latam\/index.html\">allow using facial recognition<\/a> to identify Pride protest attendees, and to ban Pride events. 
The proposal would likely be precluded by the AI Act\u2019s prohibition on conducting real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, which became applicable in February this year.&nbsp;&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The European Commission is <a href=\"https:\/\/openreview.net\/group?id=digital-strategy.ec\/AIO\/2025\/Workshop&amp;referrer=%5BHomepage%5D(%2F)#tab-your-consoles\">building a network of model evaluators<\/a> to define how general-purpose AI models with systemic risk should be evaluated in accordance with the legal requirements of the AI Act and the GPAI Code of Practice.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong>Content of the Month<\/strong><strong> <\/strong><strong>\ud83d\udcda\ud83d\udcfa\ud83c\udfa7<\/strong><\/p>\n\n\n\n<p><em>CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at <\/em><a href=\"https:\/\/cdt.org\/insights\/?keyword=artificial+intelligence&amp;area-of-focus%5B%5D=ai-machine-learning#results\"><em>CDT\u2019s <\/em><\/a><em>work.&nbsp;&nbsp;<\/em><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>CSIS, <a href=\"https:\/\/www.csis.org\/analysis\/future-transatlantic-digital-collaboration-eu-commissioner-michael-mcgrath\">The Future of Transatlantic Digital Collaboration with EU Commissioner Michael McGrath<\/a> (transcript of event)<\/li>\n\n\n\n<li>CIVIO, <a href=\"https:\/\/civio.es\/justicia\/2025\/03\/12\/spanish-prisons-use-a-30-year-old-algorithm-to-decide-on-temporary-releases\/\">Spanish prisons use a 30-year-old algorithm to decide on temporary releases<\/a><\/li>\n\n\n\n<li>EUACT, <a href=\"https:\/\/jtc21.eu\/euact-ai-project-call-for-experts\/\">AI Project: Call for Experts<\/a><\/li>\n\n\n\n<li>The Atlantic, <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/03\/libgen-meta-openai\/682093\/\">The Unbelievable Scale of AI\u2019s Pirated-Books Problem<\/a><\/li>\n\n\n\n<li>Tech Policy 
Press, <a href=\"https:\/\/www.techpolicy.press\/the-eu-ai-policy-pivot-adaptation-or-capitulation\/\">The EU AI Policy Pivot: Adaptation or Capitulation?<\/a><\/li>\n<\/ul>\n","protected":false},"featured_media":108076,"template":"","content_type":[],"area-of-focus":[834,652],"class_list":["post-108071","insight","type-insight","status-publish","has-post-thumbnail","hentry","area-of-focus-ai-policy-governance","area-of-focus-european-policy"],"acf":[],"_links":{"self":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108071","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight"}],"about":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/types\/insight"}],"version-history":[{"count":2,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108071\/revisions"}],"predecessor-version":[{"id":108075,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108071\/revisions\/108075"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media\/108076"}],"wp:attachment":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media?parent=108071"}],"wp:term":[{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/content_type?post=108071"},{"taxonomy":"area-of-focus","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/area-of-focus?post=108071"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}