{"id":100181,"date":"2023-10-02T09:29:42","date_gmt":"2023-10-02T13:29:42","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&#038;p=100181"},"modified":"2023-10-02T13:03:40","modified_gmt":"2023-10-02T17:03:40","slug":"eus-ai-act-cdt-europe-responds-to-the-european-commissions-proposal-on-high-risk-classification","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/eus-ai-act-cdt-europe-responds-to-the-european-commissions-proposal-on-high-risk-classification\/","title":{"rendered":"EU\u2019s AI Act: CDT Europe responds to the European Commission\u2019s Proposal on High-Risk Classification"},"content":{"rendered":"\n<p><strong><em>Also by Rachele Ceraulo, <a href=\"https:\/\/cdt.org\/eu\/\">CDT Europe<\/a> Advocacy Intern<\/em> <\/strong><\/p>\n\n\n\n<p>We are approaching the final weeks of negotiations on the EU\u2019s AI Act. Last week, the European Commission\u2019s compromise text on the classification of high-risk AI systems was leaked to the media. The Commission&#8217;s decision to table this compromise is telling, as it indicates that the question of high-risk classification is a point of intense negotiation between the parties. It is concerning that the provisions of the AI Act linked to the uses of AI that would pose the highest risk to human rights are in serious danger of being watered down. This comes on top of the proposed Article 6, which would allow for a self-assessment-based carve-out from high-risk classification. The combination of Article 6 and this Commission proposal could entirely undermine the vital protections that high-risk classification offers for certain uses of AI.<\/p>\n\n\n\n<p><strong>The Compromise Text<\/strong><\/p>\n\n\n\n<p>The compromise text introduces three criteria that would exempt certain AI systems used in high-risk contexts from complying with their obligations under the high-risk regime. 
According to the compromise, entities would be exempt from compliance under the following circumstances:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Executing low-complexity tasks;<\/li>\n\n\n\n<li>Confirming or improving tasks that are accessory to human assessments; or<\/li>\n\n\n\n<li>Performing preparatory tasks to an assessment.<\/li>\n<\/ul>\n\n\n\n<p>If the AI system falls under one of these three exemptions, it would be deemed not to pose a significant risk of harm, and thus would not qualify as high-risk under the Act.&nbsp;<\/p>\n\n\n\n<p>The Commission provided some practical examples for these derogations. For the first two exemptions, it offered the examples of <strong>\u201cAI system for recruitment or selection of natural persons, notably for [&#8230;] screening or filtering applications\u201d<\/strong> and <strong>\u201cAI system intended to be used for recruitment or selection of natural persons, notably for advertising vacancies.\u201d<\/strong><\/p>\n\n\n\n<p>The document also outlines a new mechanism for self-assessment under Article 6, whereby a provider must assess whether their AI systems are high-risk and draw up documentation supporting that determination. Providers could also be compelled, upon request, to submit these documents to the national competent authorities.&nbsp;<\/p>\n\n\n\n<p><strong>Problems with the Commission\u2019s proposed compromise text&nbsp;<\/strong><\/p>\n\n\n\n<p>The problem with Article 6 and the Commission\u2019s self-assessment proposal is that such an approach would put the burden on regulatory authorities to try to establish whether a company\u2019s assessment was accurate. Regulatory authorities would have to sift through and make sense of companies&#8217; own documentation, a process that would require massive resources that realistically will not be made available. 
Furthermore, such an approach incentivises companies to self-assess that their systems are not high-risk, so as to avoid the further requirements associated with that classification.<\/p>\n\n\n\n<p>What\u2019s more, providers are under no obligation to notify the supervisory authorities that they are choosing to exempt themselves from their obligations under the high-risk regime. There is also no penalty mechanism foreseen for abusive uses of this system, meaning that providers who miscategorise their systems would not be fined.<\/p>\n\n\n\n<p><strong>Criteria of High-Risk Systems<\/strong><\/p>\n\n\n\n<p>The proposed exemptions for systems that otherwise would qualify as high-risk likewise have the potential to create confusion or give rise to harm. Whether something is &#8220;a narrow procedural task of low complexity&#8221; is vague. It is true, to use the example cited by the Commission, that simply converting unstructured data (e.g., a scanned resume) into semi-structured or structured data (e.g., a searchable resume) would usually not pose a substantial risk of harm. However, there is a very blurry line between using AI to make job applications or similar materials more accessible or understandable to human recruiters and using AI to analyse those materials in a way that influences recruiters&#8217; decisions.<\/p>\n\n\n\n<p>The &#8220;accessory&#8221; and &#8220;preparatory&#8221; exceptions are also cause for serious concern. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0040162521004029\">Repeated studies have shown<\/a> that humans are inherently likely to defer to algorithmic recommendations, and exempting systems that are cast as merely preparatory or an adjunct to human decision-making would end up creating exceptions that swallow the rule. A company could claim that recruiters have the final say, even though in practice, an algorithmic recommendation could be the deciding factor. 
CDT\u2019s own research and our analysis of the EU AI Act have previously made clear the high risk of discrimination that the use of AI in recruitment and hiring poses.&nbsp;<\/p>\n\n\n\n<p><strong>Recommendations<\/strong><\/p>\n\n\n\n<p>For the reasons outlined above, and in order to safeguard the core integrity of the AI Act\u2019s human rights protections, it will be crucial that the Article 6 self-assessment provisions be dropped and that the categorisation criteria for high-risk systems remain as initially envisaged in the European Commission\u2019s text. As previously highlighted by CDT, a risk-based approach can be helpful in ensuring proportionate regulation, but to appropriately protect human rights it needs to integrate a rights-based approach. That is why, considering the above-mentioned challenges in categorising risks, mandatory <a href=\"https:\/\/ecnl.org\/sites\/default\/files\/2023-09\/AI_and_RoL_Open_Letter_final_27092023.pdf\">inclusion of fundamental rights impact assessments<\/a> (FRIAs) will be crucial to ensuring a robust rights-protective approach.<\/p>\n\n\n\n<p>Visit the <a href=\"https:\/\/cdt.org\/eu\/\">CDT Europe 
site<\/a>.<\/p>\n","protected":false},"featured_media":81520,"template":"","content_type":[7251],"area-of-focus":[834,652,7255,78],"class_list":["post-100181","insight","type-insight","status-publish","has-post-thumbnail","hentry","content_type-blog","area-of-focus-ai-policy-governance","area-of-focus-european-policy","area-of-focus-european-privacy-law","area-of-focus-privacy-data"],"acf":[],"_links":{"self":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/100181","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight"}],"about":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/types\/insight"}],"version-history":[{"count":12,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/100181\/revisions"}],"predecessor-version":[{"id":100208,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/100181\/revisions\/100208"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media\/81520"}],"wp:attachment":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media?parent=100181"}],"wp:term":[{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/content_type?post=100181"},{"taxonomy":"area-of-focus","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/area-of-focus?post=100181"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}