{"id":105833,"date":"2024-09-27T13:55:54","date_gmt":"2024-09-27T17:55:54","guid":{"rendered":"https:\/\/cdt.org\/?post_type=insight&#038;p=105833"},"modified":"2024-10-10T12:15:41","modified_gmt":"2024-10-10T16:15:41","slug":"technology-as-policy-hidden-rules-and-how-to-reveal-them","status":"publish","type":"insight","link":"https:\/\/cdt.org\/insights\/technology-as-policy-hidden-rules-and-how-to-reveal-them\/","title":{"rendered":"Technology as Policy: Hidden Rules and How to Reveal Them"},"content":{"rendered":"\n<p><strong><strong> Jenny L. Davis<\/strong>,<\/strong> <a href=\"https:\/\/as.vanderbilt.edu\/sociology\/bio\/jenny-davis\/\">Professor of Sociology<\/a>, Vanderbilt University<\/p>\n\n\n\n<p><em>Disclaimer: The views expressed by CDT\u2019s Non-Resident Fellows are their own and do not necessarily reflect the policy, position, or views of CDT.<\/em><\/p>\n\n\n\n<p>I know we\u2019ve moved on and a new contest looms, but for a moment, revisit the June \u201924 Trump-Biden debate. Put aside the mistruths and political fallout. That story is stale and already overtold. Recall instead the anomaly of the night, which was a remarkable adherence to rules and procedures. When Trump and Biden debated in 2019 they<a href=\"https:\/\/www.vox.com\/2020\/10\/23\/21529607\/biden-trump-debate-won-interrupt-kristen-welker-presidential\"> interrupted each other 96 times<\/a>. In 2024, nobody interjected and everyone waited their turn. This disciplined display is the likely new standard. Yet its cause can\u2019t be traced to evolutions of conscience or political culture. Rather, credit goes to a simple audio adjustment.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Fully implementing a tactic first trialed in the previous US election cycle, the candidates\u2019 microphones were only functional during their designated speaking periods. This meant just one speaker was audible at a time. In place of candidate compliance or moderator enforcement, regulation was outsourced to a mute button. 
The same was true when Trump and Harris took the stage.<\/p>\n\n\n\n<p>This mute-button mandate illustrates a basic insight from scholars of technology in society: technology itself <em>is<\/em> policy. Technology-as-policy refers to the way technologies, through their design, shape social behaviors and outcomes. Written rules and regulations codify what can, should, must, and cannot be done. Technologies do the same through their material form\u2014preventing, confining, persuading, and compelling.<\/p>\n\n\n\n<p><strong>The Transparency Problem<\/strong><\/p>\n\n\n\n<p>Framing technology as policy highlights the power of technical systems over social life. But this framing also surfaces a transparency problem: as policy instruments, technologies are imprecise and inscrutable.<\/p>\n\n\n\n<p>Imagine a scenario in which policies were unwritten and implicit, regulating people and organizations through subtle mechanisms of control. Imagine this control diffusing throughout societal spheres, affecting economic relations, political processes, domestic mundanities, and intimate interactions. Imagine codes cast upon publics who had, at best, a vague sense of those impositions, which remained tacit and thereby difficult to pinpoint, let alone contest. Technologies inflict this kind of quiet control every day, structuring our lives without stating what rules are in play, whose values they reflect, or how they configure social practices.<\/p>\n\n\n\n<p><strong>Clearing the Fog: An \u2018Affordance\u2019 Approach<\/strong><\/p>\n\n\n\n<p>Over nearly a decade, I\u2019ve been constructing and applying a framework to render technical systems more transparent and understandable. That work centers on the concept of \u2018affordance\u2019, or how technologies enable, constrain, and set parameters of possibility. 
I wrote a <a href=\"https:\/\/journals.sagepub.com\/doi\/abs\/10.1177\/0270467617714944?journalCode=bsta\">conceptual paper on the topic<\/a>, <a href=\"https:\/\/mitpress.mit.edu\/9780262044110\/how-artifacts-afford\/\">a book on the topic<\/a>, and, most recently, applied an affordance lens to <a href=\"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3593013.3594000\">machine learning (ML) technologies<\/a>. Across sectors, but especially in those that are hardest to penetrate due to technical and social opacities, specifying affordances is a political project of elucidation.<\/p>\n\n\n\n<p>My book, <em>How Artifacts Afford: The Power and Politics of Everyday Things,<\/em> presents the \u201cmechanisms and conditions framework of affordances.\u201d This framework operationalizes the relationship between technical features and social effects. The mechanisms of affordance are represented by a simple series of descriptors: <em>request, demand, encourage, discourage, refuse,<\/em> and <em>allow.<\/em> These indicate varying degrees of force exerted by a technology. For example, muted microphones on a debate stage <em>demand<\/em> adherence to turn-taking, whereas always-on mics <em>allow<\/em> interruption. These mechanisms are not absolute, but conditioned by social variables. For instance, a booming voice that carries without amplification diminishes the <em>demand<\/em> into a <em>request,<\/em> unless rule violations are harshly penalized (e.g., disqualification), in which case the <em>demand<\/em> holds regardless of vocal capacities. (This information is also shared in an <a href=\"https:\/\/youtu.be\/5QN8WokJQ_Q?feature=shared\">explainer video<\/a>.)<\/p>\n\n\n\n<p><strong>Warehouse Work: A Case Example<\/strong><\/p>\n\n\n\n<p>My most recent work applies the mechanisms and conditions framework to machine learning technologies and the systems of which they are a part. This work considers what kinds of policies ML technologies enact by design. 
Elucidating ML is an urgent matter, as these technologies are increasingly integrated into personal and public life, yet remain murky in their operations due to impenetrable math and proprietary protections.<\/p>\n\n\n\n<p>Against this opacity, the mechanisms and conditions framework interrogates and articulates how ML applications reflect and affect socio-cultural patterns, for whom, and to what effect. Let\u2019s run through an example of data-driven warehouse work, borrowed from my <a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3593013.3594000\">2023 paper on affordances for ML<\/a>. This example is based on Amazon Inc.\u2019s fulfillment centers, which act as anchor points for today\u2019s on-demand consumer economy.<\/p>\n\n\n\n<p>Documented by <a href=\"https:\/\/www.nytimes.com\/2021\/06\/15\/briefing\/amazon-warehouse-investigation.html\">journalists<\/a> and <a href=\"https:\/\/www.plutobooks.com\/9780745342177\/the-warehouse\/\">researchers<\/a>, Amazon warehouse work is fast-paced, tightly controlled, closely observed, and data-driven. Machine learning underpins nearly all workplace procedures. Data from physical sensors, digital monitors, GPS trackers, product scanners, customer ratings, and management evaluations\u2014among many other sources\u2014process through ML algorithms to set a pace of work, break that work into granular pieces, dictate workers\u2019 movements, schedule workers\u2019 shifts, and hire or fire workers according to productivity metrics paired with fluctuations in supply and demand.<\/p>\n\n\n\n<p>The mechanisms and conditions framework clarifies how ML sets workplace policies. For example, GPS tracking <em>encourages<\/em> worker surveillance while feeding into data-hungry ML models. Those models in turn <em>demand<\/em> compliance with automated directives. 
These directives deskill warehouse work and thus <em>allow<\/em> management to replace any worker who does not fit corporate priorities. Data-driven rate-setting <em>requests<\/em> a standard speed of physical movement\u2014<em>discouraging<\/em> (or <em>refusing<\/em>) bodies that are ill, disabled, aging, or tired. Management, too, becomes data-dependent and disposable, affixed to systems that <em>demand<\/em> attention to metrics, <em>discourage<\/em> professional discretion, and <em>request<\/em> commitment to corporate goals, even when those goals conflict with managers\u2019 own and their colleagues\u2019 interests.<\/p>\n\n\n\n<p>The policies of Amazon warehouse work are technologically instilled, with ML models processing data into decree. Yet the shape of things can change. Just as the mechanisms and conditions framework reveals what <em>is<\/em>, it can also suggest alternatives. Drawing from US <a href=\"https:\/\/unionizeamazonkcvg.org\/what-were-fighting-for#:~:text=They%20only%20want%20to%20promote,a%20more%20dangerous%20work%20environment.\">workers\u2019 union movements<\/a>, we might reimagine warehouse conditions in which only products, but never people, are tagged and tracked, <em>demanding<\/em> the monitoring of goods while <em>discouraging<\/em> infrastructures of surveillance; we might recalibrate work rates, <em>allowing<\/em> bodily diversity while <em>encouraging<\/em> employee wellness; and we might envisage scheduling algorithms that incorporate worker needs, setting shifts in advance while offering live swaps between workers who need personal time or paid hours, respectively, <em>requesting<\/em> a balance of stability and flexibility for whole human beings with competing obligations.<\/p>\n\n\n\n<p><strong>Beyond the Warehouse Walls<\/strong><\/p>\n\n\n\n<p>Algorithmic governance of warehouse work is but one case example among many possible others\u2014think dating apps, r\u00e9sum\u00e9 sorting, insurance approvals, and domestic bots. 
Across social spheres, the mechanisms and conditions framework bolsters transparency, surfacing techno-policies so they can be scrutinized, challenged, reimagined, and remade. It is a general-purpose framework, ready for application across diverse domains.<\/p>\n","protected":false},"featured_media":99459,"template":"","content_type":[7251,10206],"area-of-focus":[834,10212],"class_list":["post-105833","insight","type-insight","status-publish","has-post-thumbnail","hentry","content_type-blog","content_type-cdt-fellows-content","area-of-focus-ai-policy-governance","area-of-focus-cdtresearch"],"acf":[],"_links":{"self":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/105833","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight"}],"about":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/types\/insight"}],"version-history":[{"count":6,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/105833\/revisions"}],"predecessor-version":[{"id":105962,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/insight\/105833\/revisions\/105962"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media\/99459"}],"wp:attachment":[{"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/media?parent=105833"}],"wp:term":[{"taxonomy":"content_type","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/content_type?post=105833"},{"taxonomy":"area-of-focus","embeddable":true,"href":"https:\/\/cdt.org\/wp-json\/wp\/v2\/area-of-focus?post=105833"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}