Op-Ed – Artificial Sweeteners: The Dangers of Sycophantic AI
Center for Democracy and Technology
https://cdt.org/insights/op-ed-artificial-sweeteners-the-dangers-of-sycophantic-ai/

This op-ed – authored by CDT's Amy Winecoff – first appeared in Tech Policy Press on May 14, 2025. A portion of the text has been pasted below.

At the end of April, OpenAI released a model update that made ChatGPT feel less like a helpful assistant and more like a yes-man. The update was quickly rolled back, […]