{"id":108437,"date":"2025-04-22T09:40:41","date_gmt":"2025-04-22T13:40:41","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&#038;p=108437"},"modified":"2025-04-22T09:54:52","modified_gmt":"2025-04-22T13:54:52","slug":"op-ed-before-ai-agents-act-we-need-answers","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/op-ed-before-ai-agents-act-we-need-answers\/","title":{"rendered":"Op-ed: Before AI Agents Act, We Need Answers"},"content":{"rendered":"\n<p>CDT's Ruchika Joshi penned a new op-ed that first appeared in Tech Policy Press on April 17, 2025.<\/p>\n\n\n\n<p>Read an excerpt: <\/p>\n\n\n\n<p><em>Tech companies are betting big on AI agents. From sweeping&nbsp;<a href=\"https:\/\/www.forbes.com\/sites\/moorinsights\/2024\/09\/30\/agentforce-from-salesforce-impacts-on-enterprise-data-erp-and-scm\/\" target=\"_blank\" rel=\"noreferrer noopener\">organizational overhauls<\/a>&nbsp;to CEOs claiming agents will \u2018<a href=\"https:\/\/blog.samaltman.com\/reflections\" target=\"_blank\" rel=\"noreferrer noopener\">join the workforce<\/a>\u2019 and power a&nbsp;<a href=\"https:\/\/fortune.com\/2025\/01\/07\/nvidias-jensen-huang-says-ai-agents-are-a-multi-trillion-dollar-opportunity\/\" target=\"_blank\" rel=\"noreferrer noopener\">multi-trillion-dollar industry<\/a>, the&nbsp;<a href=\"https:\/\/www.wsj.com\/articles\/ai-agents-are-a-moment-of-truth-for-tech-8ac5365a\" target=\"_blank\" rel=\"noreferrer noopener\">race to match hype<\/a>&nbsp;is on.<\/em><\/p>\n\n\n\n<p><em>While the boundaries of what qualifies as an \u2018AI agent\u2019 remain fuzzy, the term is commonly used to describe AI systems designed to plan and execute tasks on behalf of users with increasing autonomy. 
Unlike AI-powered systems like chatbots or recommendation engines, which can generate responses or make suggestions to assist users in making decisions, AI agents are envisioned to execute those decisions by directly interacting with external websites or tools via APIs.<\/em><\/p>\n\n\n\n<p><em>Where an AI chatbot might have previously suggested flight routes to a given destination, AI agents are now being designed to find which flight is cheapest, book the ticket, fill out the user\u2019s passport information, and email the boarding pass. Building on that idea, early demonstrations of agent use include&nbsp;<a href=\"https:\/\/www.theguardian.com\/technology\/2025\/mar\/09\/who-bought-this-smoked-salmon-how-ai-agents-will-change-the-internet-and-shopping-lists\" target=\"_blank\" rel=\"noreferrer noopener\">operating a computer for grocery shopping<\/a>,&nbsp;<a href=\"https:\/\/www.theregister.com\/2025\/03\/12\/servicenow_yokohama\/\" target=\"_blank\" rel=\"noreferrer noopener\">automating HR approvals<\/a>, or&nbsp;<a href=\"https:\/\/siliconangle.com\/2025\/03\/11\/ai-agent-powered-compliance-automation-startup-norm-ai-raises-48m\/\" target=\"_blank\" rel=\"noreferrer noopener\">managing legal compliance tasks<\/a>.<\/em><\/p>\n\n\n\n<p><em>Yet current AI agents have&nbsp;<a href=\"https:\/\/www.washingtonpost.com\/technology\/2025\/02\/07\/openai-operator-ai-agent-chatgpt\/\" target=\"_blank\" rel=\"noreferrer noopener\">been quick to break<\/a>, indicating that reliable task execution remains an elusive goal. This is unsurprising, since AI agents rely on the same foundation models as non-agentic AI and so are prone to familiar challenges of bias, hallucination, brittle reasoning, and limited real-world grounding. 
Non-agentic AI systems have already been shown&nbsp;<a href=\"https:\/\/www.forbes.com\/sites\/marisagarcia\/2024\/02\/19\/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case\/\" target=\"_blank\" rel=\"noreferrer noopener\">to make expensive mistakes<\/a>,&nbsp;<a href=\"https:\/\/hai.stanford.edu\/news\/covert-racism-ai-how-language-models-are-reinforcing-outdated-stereotypes\" target=\"_blank\" rel=\"noreferrer noopener\">exhibit biased decision making<\/a>, and&nbsp;<a href=\"https:\/\/www.anthropic.com\/research\/reasoning-models-dont-say-think\" target=\"_blank\" rel=\"noreferrer noopener\">mislead users about their \u2018thinking\u2019<\/a>. Enabling such systems to now&nbsp;act&nbsp;on behalf of users will only raise the stakes of these failures.<\/em><\/p>\n\n\n\n<p><em>As companies race to build and deploy AI agents to act with less supervision than earlier systems, what is keeping these agents from harming people?<\/em><\/p>\n\n\n\n<p><em>The unsettling answer is that no one really knows, and the documentation that the agent developers provide doesn\u2019t add much clarity. For example, while system or model cards released by&nbsp;<a href=\"https:\/\/openai.com\/index\/operator-system-card\/\" target=\"_blank\" rel=\"noreferrer noopener\">OpenAI<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/assets.anthropic.com\/m\/61e7d27f8c8f5919\/original\/Claude-3-Model-Card.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Anthropic<\/a>&nbsp;offer some details on agent capabilities and safety testing, they also include vague assurances on risk mitigation efforts without providing supporting evidence. 
Others&nbsp;<a href=\"https:\/\/x.ai\/news\/grok-3\" target=\"_blank\" rel=\"noreferrer noopener\">have released no documentation at all<\/a>&nbsp;or only done so after&nbsp;<a href=\"https:\/\/deepmind.google\/technologies\/project-mariner\/\" target=\"_blank\" rel=\"noreferrer noopener\">considerable delay<\/a>.<\/em><\/p>\n\n\n\n<p>Read the<a href=\"https:\/\/www.techpolicy.press\/before-ai-agents-act-we-need-answers\/\"> full op-ed<\/a>. <\/p>\n","protected":false},"featured_media":86100,"template":"","content_type":[],"area-of-focus":[834,10216],"class_list":["post-108437","insight","type-insight","status-publish","has-post-thumbnail","hentry","area-of-focus-ai-policy-governance","area-of-focus-cdt-ai-governance-lab"],"acf":[],"_links":{"self":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108437","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight"}],"about":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/types\/insight"}],"version-history":[{"count":4,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108437\/revisions"}],"predecessor-version":[{"id":108442,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/108437\/revisions\/108442"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media\/86100"}],"wp:attachment":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media?parent=108437"}],"wp:term":[{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/content_type?post=108437"},{"taxonomy":"area-of-focus","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/area-of-focus?post=108437"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}