{"id":105037,"date":"2024-07-30T00:01:00","date_gmt":"2024-07-30T04:01:00","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&#038;p=105037"},"modified":"2024-07-29T13:29:13","modified_gmt":"2024-07-29T17:29:13","slug":"brief-election-integrity-recommendations-for-generative-ai-developers","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/brief-election-integrity-recommendations-for-generative-ai-developers\/","title":{"rendered":"Brief \u2013\u00a0Election Integrity Recommendations for Generative AI Developers"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/2024-07-25-CDT-Elections-Election-Integrity-Recommendations-for-Generative-AI-Developers-brief-final.pdf\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"536\" src=\"https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/CDT-Brief-\u2013-Election-Integrity-Recommendations-for-Generative-AI-Developers-1024x536.png\" alt=\"CDT Brief, entitled\u00a0&quot;Election Integrity Recommendations for Generative AI Developers.&quot; White and blue document on a grey background.\" class=\"wp-image-105085\" srcset=\"https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/CDT-Brief-\u2013-Election-Integrity-Recommendations-for-Generative-AI-Developers-1024x536.png 1024w, https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/CDT-Brief-\u2013-Election-Integrity-Recommendations-for-Generative-AI-Developers-640x335.png 640w, https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/CDT-Brief-\u2013-Election-Integrity-Recommendations-for-Generative-AI-Developers-768x402.png 768w, https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/CDT-Brief-\u2013-Election-Integrity-Recommendations-for-Generative-AI-Developers-1536x804.png 1536w, https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/CDT-Brief-\u2013-Election-Integrity-Recommendations-for-Generative-AI-Developers-2048x1071.png 2048w\" 
sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption class=\"wp-element-caption\">CDT Brief, entitled&nbsp;&#8220;Election Integrity Recommendations for Generative AI Developers.&#8221; White and blue document on a grey background.<\/figcaption><\/figure>\n\n\n\n<p>With over <a href=\"https:\/\/perma.cc\/H96M-WMH3\" target=\"_blank\" rel=\"noreferrer noopener\">80 countries<\/a> and more than <a href=\"https:\/\/perma.cc\/NP7G-AQR8\" target=\"_blank\" rel=\"noreferrer noopener\">half of the world\u2019s population<\/a> going to the polls this year, 2024 represents the largest single year of global elections since the advent of the internet. It has also been dubbed the \u2018<a href=\"https:\/\/perma.cc\/BYU3-SWEN\" target=\"_blank\" rel=\"noreferrer noopener\">First AI Election<\/a>\u2019, in light of the boom in widely accessible generative AI tools that have the potential to accelerate cybersecurity and information integrity challenges to global elections this year.&nbsp;<\/p>\n\n\n\n<p>Addressing the risks that generative AI poses to elections requires an ecosystem approach. In part, that means focusing on the <em>distribution<\/em> of deceptive AI-generated election content on social networks and private messaging services, and through robocalls, TV, and radio. 
While identifying solutions to the distribution of this content is absolutely necessary \u2014 and CDT has supported several initiatives to create voluntary standards for technology companies that help prevent these risks \u2014 it is equally important to consider the policies and product interventions that generative AI developers should adopt in order to prevent harmful content from being created on or spread through their apps and services.&nbsp;<\/p>\n\n\n\n<p>Although we are halfway through this election year, it remains imperative for AI developers to quickly develop election integrity programs employing a variety of levers, including policy, product, and enforcement, to protect democratic elections this year and beyond.<\/p>\n\n\n\n<p><strong>Summary of Recommendations<\/strong><\/p>\n\n\n\n<p><strong><em>Usage Policies<\/em><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prohibit the generation of realistic images, videos, and audio depicting political figures or political and electoral events.<\/li>\n\n\n\n<li>Prohibit users from conducting political campaign activities or demographic targeting \u2014 at least in the short term \u2014 and develop transparent goals for longer-term ethical development of political uses of AI.&nbsp;<\/li>\n\n\n\n<li>Prohibit the use of generative AI ad tools for political advertisements.<\/li>\n\n\n\n<li>Prohibit any conduct that interferes with elections, including actions that prevent someone from voting; mislead someone into voting differently or not voting at all; or incite, support, or encourage violence against election processes or workers.<\/li>\n\n\n\n<li>Refrain from using stored memory or other methods of personalization in generating responses to electoral and political queries.<\/li>\n\n\n\n<li>Refrain from releasing text-to-speech cloning tools that allow users to replicate the natural voice of real people, including political 
figures.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong><em>Product Interventions<\/em><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Develop user interface pop-ups or labels relating to known narratives of election mis- and disinformation.<\/li>\n\n\n\n<li>Disclose how recently your chatbot\u2019s training data was updated when providing responses to time-sensitive election queries.<\/li>\n\n\n\n<li>Promote, and direct users to, authoritative sources of election-related information.<\/li>\n\n\n\n<li>Allow users to report policy-violating answers in chatbots and policy-violating apps built using an API.<\/li>\n\n\n\n<li>Include an appeals option for enforcement actions.<\/li>\n\n\n\n<li>Commit to developing and embedding machine-readable watermarks and metadata in image, video, and audio content using a common standard that social platforms can detect.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong><em>Enforcement<\/em><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Proactively enforce usage policies on elections at all times, not just during active election periods.&nbsp;<\/li>\n\n\n\n<li>Consistently deploy product interventions for the most common election lies, and create protocols to quickly deploy product interventions to address newly emerging, election-specific mis- and disinformation.<\/li>\n\n\n\n<li>Proactively test model answers to common election queries.<\/li>\n\n\n\n<li>Create escalation channels to accelerate leadership\u2019s visibility into emerging issues, particularly during high-risk election periods.<\/li>\n\n\n\n<li>Adequately resource and staff policy and enforcement teams.&nbsp;<\/li>\n\n\n\n<li>Institute actor-level enforcement for election integrity policy violations.<\/li>\n<\/ul>\n\n\n\n<p><strong><em>Transparency<\/em><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Be transparent about election policies.<\/li>\n\n\n\n<li>Publish regular transparency reports on election mis- and disinformation and deceptive AI 
usage.<\/li>\n\n\n\n<li>Consult with civil society and facilitate researcher access to usage data.<\/li>\n\n\n\n<li>Develop relationships and communication channels with election administrators.<\/li>\n<\/ul>\n\n\n\n<p><strong><em><a href=\"https:\/\/cdt.org\/wp-content\/uploads\/2024\/07\/2024-07-25-CDT-Elections-Election-Integrity-Recommendations-for-Generative-AI-Developers-brief-final.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Read the full report.<\/a><\/em><\/strong><\/p>\n","protected":false},"featured_media":105085,"template":"","content_type":[521],"area-of-focus":[834,10211,849],"class_list":["post-105037","insight","type-insight","status-publish","has-post-thumbnail","hentry","content_type-report","area-of-focus-ai-policy-governance","area-of-focus-election-disinformation","area-of-focus-elections-democracy"],"acf":[],"_links":{"self":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/105037","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight"}],"about":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/types\/insight"}],"version-history":[{"count":5,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/105037\/revisions"}],"predecessor-version":[{"id":105123,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/105037\/revisions\/105123"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media\/105085"}],"wp:attachment":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media?parent=105037"}],"wp:term":[{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/content_type?post=105037"},{"taxonomy":"area-of-focus","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/area-of-focus?post=105037"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}