{"id":108816,"date":"2025-05-14T11:26:42","date_gmt":"2025-05-14T15:26:42","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&#038;p=108816"},"modified":"2025-05-14T11:26:43","modified_gmt":"2025-05-14T15:26:43","slug":"ai-agents-in-focus-technical-and-policy-considerations","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/ai-agents-in-focus-technical-and-policy-considerations\/","title":{"rendered":"AI Agents In Focus: Technical and Policy Considerations"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"536\" src=\"https:\/\/cdt.org\/wp-content\/uploads\/2025\/05\/AI-Agents-In-Focus-Technical-and-Policy-Considerations-1024x536.png\" alt=\"AI Agents In Focus: Technical and Policy Considerations. White and black document on a grey background.\" class=\"wp-image-108832\" srcset=\"https:\/\/cdt.org\/wp-content\/uploads\/2025\/05\/AI-Agents-In-Focus-Technical-and-Policy-Considerations-1024x536.png 1024w, https:\/\/cdt.org\/wp-content\/uploads\/2025\/05\/AI-Agents-In-Focus-Technical-and-Policy-Considerations-640x335.png 640w, https:\/\/cdt.org\/wp-content\/uploads\/2025\/05\/AI-Agents-In-Focus-Technical-and-Policy-Considerations-768x402.png 768w, https:\/\/cdt.org\/wp-content\/uploads\/2025\/05\/AI-Agents-In-Focus-Technical-and-Policy-Considerations-1536x804.png 1536w, https:\/\/cdt.org\/wp-content\/uploads\/2025\/05\/AI-Agents-In-Focus-Technical-and-Policy-Considerations-2048x1072.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\"><em>Brief entitled, &#8220;AI Agents In Focus: Technical and Policy Considerations.&#8221; White and black document on a grey background.<\/em><\/figcaption><\/figure>\n\n\n\n<p>   <\/p>\n\n\n\n<p>AI agents are moving rapidly from prototypes to real-world products. These systems are increasingly embedded into consumer tools, enterprise workflows, and developer platforms. 
Yet despite their growing visibility, the term \u201cAI agent\u201d lacks a clear definition and is used to describe a wide spectrum of systems \u2014 from conversational assistants to action-oriented tools capable of executing complex tasks. This brief focuses on a narrower and increasingly relevant subset: action-taking AI agents, which pursue goals by making decisions and interacting with digital environments or tools, often with limited human oversight.&nbsp;<\/p>\n\n\n\n<p>As an emerging class of AI systems, action-taking agents mark a distinct shift from earlier generations of generative AI. Unlike passive assistants that respond to user prompts, these systems can initiate tasks, revise plans based on new information, and operate across applications and over longer time horizons. They typically combine large language models (LLMs) with structured workflows and tool access, enabling them to navigate interfaces, retrieve and input data, and coordinate tasks across systems, often alongside a conversational interface. In more advanced settings, they operate in orchestration frameworks where multiple agents collaborate, each with distinct roles or domain expertise.<\/p>\n\n\n\n<p>This brief begins by outlining how action-taking agents function, the technical components that enable them, and the kinds of agentic products being built. It then explains how these components \u2014 such as control loop complexity, tool access, and scaffolding architecture \u2014 shape agents\u2019 behavior in practice. Finally, it surfaces emerging areas of policy concern where the risks posed by agents increasingly appear to outpace the safeguards currently in place, including security, privacy, control, human-likeness, governance infrastructure, and allocation of responsibility. 
Together, these sections aim to clarify both how AI agents currently work and what is needed to ensure they are responsibly developed and deployed.<\/p>\n\n\n\n<p><strong><em><a href=\"https:\/\/cdt.org\/wp-content\/uploads\/2025\/05\/2025-05-14-AI-Gov-Lab-AI-Agents-In-Focus-brief-final.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Read the full brief.<\/a><\/em><\/strong><\/p>\n","protected":false},"featured_media":108832,"template":"","content_type":[521],"area-of-focus":[834,10216],"class_list":["post-108816","insight","type-insight","status-publish","has-post-thumbnail","hentry","content_type-report","area-of-focus-ai-policy-governance","area-of-focus-cdt-ai-governance-lab"],"acf":[],"_links":{"self":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108816","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight"}],"about":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/types\/insight"}],"version-history":[{"count":3,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108816\/revisions"}],"predecessor-version":[{"id":108833,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108816\/revisions\/108833"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media\/108832"}],"wp:attachment":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media?parent=108816"}],"wp:term":[{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/content_type?post=108816"},{"taxonomy":"area-of-focus","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/area-of-focus?post=108816"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}