<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <atom:link rel="self" type="application/rss+xml" href="https://escholarship.org/uc/uclalaw_pulse_papers/rss"/>
    <ttl>720</ttl>
    <title>Recent uclalaw_pulse_papers items</title>
    <link>https://escholarship.org/uc/uclalaw_pulse_papers/rss</link>
    <description>Recent eScholarship items from AI PULSE Papers</description>
    <pubDate>Fri, 15 May 2026 07:27:43 +0000</pubDate>
    <item>
      <title>Max – A Thought Experiment: Could AI Run the Economy Better Than Markets?</title>
      <link>https://escholarship.org/uc/item/4sg4m848</link>
      <description>&lt;p&gt;One of the fundamental critiques against twentieth-century experiments in central economic planning, and the main reason for their failures, was the inability of human-directed planning systems to manage the data gathering, analysis, computation, and control necessary to direct the vast complexity of production, allocation, and exchange decisions that make up a modern economy. Rapid recent advances in AI, data, and related technological capabilities have re-opened that old question, and provoked vigorous speculation about the feasibility, benefits, and threats of an AI-directed economy. This paper presents a thought experiment about how this might work, based on assuming a powerful AI agent (whimsically named “Max”) with no binding computational or algorithmic limits on its (his) ability to do the task. The paper’s novel contribution is to make this hitherto under-specified question more concrete and specific. It reasons concretely through how such a system might work under...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/4sg4m848</guid>
      <pubDate>Mon, 16 Mar 2020 00:00:00 +0000</pubDate>
      <author>
        <name>Parson, Edward A.</name>
      </author>
    </item>
    <item>
      <title>AI Without Math: Making AI and ML Comprehensible</title>
      <link>https://escholarship.org/uc/item/9rk482zq</link>
      <description>If we want nontechnical stakeholders to respond to artificial intelligence developments in an informed way, we must help them acquire a more-than-superficial understanding of artificial intelligence (AI) and machine learning (ML). Explanations involving formal mathematical notation will not reach most people who need to make informed decisions about AI. We believe it is possible to teach many AI and ML concepts without slipping into mathematical notation.</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/9rk482zq</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>McCarl, Ryan</name>
      </author>
      <author>
        <name>Lobana, Jodie</name>
      </author>
      <author>
        <name>von Stackelberg, Heather</name>
      </author>
      <author>
        <name>Schell, Kristen</name>
      </author>
      <author>
        <name>Leshchinskiy, Brandon</name>
      </author>
      <author>
        <name>Humaidan, Dania</name>
      </author>
      <author>
        <name>Singh, Gursimran</name>
      </author>
    </item>
    <item>
      <title>Siri Humphrey: Design Principles for an AI Policy Analyst</title>
      <link>https://escholarship.org/uc/item/95735485</link>
      <description>This workgroup considered whether the policy analysis function in government could be replaced by an artificial intelligence policy analyst (AIPA) that responds directly to requests for information and decision support from political and administrative leaders. We describe the current model for policy analysis, identify the design criteria for an AIPA, and consider its limitations should it be adopted. A core limitation is the essential human interaction between a decision maker and an analyst/advisor, which extends the meaning and purpose of policy analysis beyond a simple synthesis or technical analysis view (each of which is nonetheless a complex task in its own right). Rather than propose a wholesale replacement of policy analysts with AIPA, we reframe the question focussing on the use of AI by human policy analysts for augmenting their current work, what we term intelligence-amplified policy analysis (IAPA). We conclude by considering how policy analysts, schools of public...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/95735485</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Armstrong, Ben</name>
      </author>
      <author>
        <name>Beretta, Megan</name>
      </author>
      <author>
        <name>Crothers, Evan</name>
      </author>
      <author>
        <name>Karlin, Michael</name>
      </author>
      <author>
        <name>Kim, Dongwoo</name>
      </author>
      <author>
        <name>Longo, Justin</name>
      </author>
      <author>
        <name>Powell, Lorne</name>
      </author>
      <author>
        <name>Sanders, Trooper</name>
      </author>
    </item>
    <item>
      <title>AI &amp;amp; Agency</title>
      <link>https://escholarship.org/uc/item/8q15786s</link>
      <description>&lt;p&gt;In July of 2019, at the Summer Institute on AI and Society in Edmonton, Canada (co-sponsored by CIFAR and the AI Pulse Project of UCLA Law), scholars from across disciplines came together in an intensive workshop. For the second half of the workshop, the cohort split into smaller working groups to delve into specific topics related to AI and Society.&lt;/p&gt;&lt;p&gt;I proposed deeper exploration on the topic of “agency,” which is defined differently across domains and cultures, and relates to many of the topics of discussion in AI ethics, including responsibility and accountability. It is also the subject of an ongoing art and research project I’m producing. As a group, we looked at definitions of agency across fields, found paradoxes and incongruities, shared our own questions, and produced a visual map of the conceptual space. We decided that our disparate perspectives were better articulated through a collection of short written pieces, presented as a set, rather than a singular essay...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/8q15786s</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Newman, Sarah</name>
      </author>
      <author>
        <name>Birhane, Abeba</name>
      </author>
      <author>
        <name>Zajko, Mike</name>
      </author>
      <author>
        <name>Osoba, Osonde A.</name>
      </author>
      <author>
        <name>Prunkl, Carina</name>
      </author>
      <author>
        <name>Lima, Gabriel</name>
      </author>
      <author>
        <name>Bowen, Jon</name>
      </author>
      <author>
        <name>Sutton, Rich</name>
      </author>
      <author>
        <name>Adams, Cathy</name>
      </author>
    </item>
    <item>
      <title>Creating a Tool to Reproducibly Estimate the Ethical Impact of Artificial Intelligence</title>
      <link>https://escholarship.org/uc/item/56w756v8</link>
      <description>How can an organization systematically and reproducibly measure the ethical impact of its AI-enabled platforms? Organizations that create applications enhanced by artificial intelligence and machine learning (AI/ML) are increasingly asked to review the ethical impact of their work. Governance and oversight organizations are increasingly asked to provide documentation to guide the conduct of ethical impact assessments. This document outlines a draft procedure for organizations to evaluate the ethical impacts of their work. We propose that ethical impact can be evaluated via a principles-based approach when the effects of platforms’ probable uses are interrogated through informative questions, with answers scaled and weighted to produce a multi-layered score. We initially assess ethical impact as the summed score of a project’s potential to protect human rights. However, we do not suggest that the ethical impact of platforms is assessed exclusively through preservation of human...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/56w756v8</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Jordan, Sara</name>
      </author>
      <author>
        <name>Fazelpour, Sina</name>
      </author>
      <author>
        <name>Koshiyama, Adriano</name>
      </author>
      <author>
        <name>Kueper, Jaky</name>
      </author>
      <author>
        <name>DeChant, Chad</name>
      </author>
      <author>
        <name>Leong, Brenda</name>
      </author>
      <author>
        <name>Marchant, Gary</name>
      </author>
      <author>
        <name>Shank, Craig</name>
      </author>
    </item>
    <item>
      <title>On Meaningful Human Control in High-Stakes Machine-Human Partnerships</title>
      <link>https://escholarship.org/uc/item/38q4b3z4</link>
      <description>Our team at the Summer Institute was diverse in both skills (including technical computer science, cognitive science, systems innovation, and radiology expertise) and career stage (including faculty, graduate students, and a medical student). We were brought together at the ‘pitch’ stage by a mutual interest in human-machine partnerships in complex, high-stakes domains such as healthcare, transport, and autonomous weapons. We began with a focus on the topic of “meaningful human control” – a term most often applied in the autonomous weapons literature, which refers broadly to human participation in the deployment and operation of potentially autonomous artificial intelligence (AI) systems, such that the human has a meaningful contribution to decisions and outcomes.</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/38q4b3z4</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>McCoy, Liam</name>
      </author>
      <author>
        <name>Burkell, Jacquelyn</name>
      </author>
      <author>
        <name>Card, Dallas</name>
      </author>
      <author>
        <name>Davis, Brent</name>
      </author>
      <author>
        <name>Gichoya, Judy</name>
      </author>
      <author>
        <name>LePage, Sophie</name>
      </author>
      <author>
        <name>Madras, David</name>
      </author>
    </item>
    <item>
      <title>Artificial Intelligence’s Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and its Rapid Outputs</title>
      <link>https://escholarship.org/uc/item/2gp9314r</link>
      <description>The works assembled here are the initial outputs of the First International Summer Institute on Artificial Intelligence and Society (SAIS). The Summer Institute was convened from July 21 to 24, 2019 at the Alberta Machine Intelligence Institute (Amii) in Edmonton, in conjunction with the 2019 Deep Learning/Reinforcement Learning Summer School. The Summer Institute was jointly sponsored by the AI Pulse project of the UCLA School of Law (funded by a generous grant from the Open Philanthropy Project) and the Canadian Institute for Advanced Research (CIFAR), and was co-organized by Ted Parson (UCLA School of Law), Alona Fyshe (University of Alberta and Amii), and Dan Lizotte (University of Western Ontario). The Summer Institute brought together a distinguished international group of 80 researchers, professionals, and advanced students from a wide range of disciplines and areas of expertise, for three days of intensive mutual instruction and collaborative work on the societal implications...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/2gp9314r</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Parson, Edward</name>
      </author>
      <author>
        <name>Fyshe, Alona</name>
      </author>
      <author>
        <name>Lizotte, Dan</name>
      </author>
    </item>
    <item>
      <title>From Shortcut to Sleight of Hand: Why the Checklist Approach in the EU Guidelines Does Not Work</title>
      <link>https://escholarship.org/uc/item/12s9x39n</link>
      <description>&lt;p&gt;In April 2019, the High-Level Expert Group on Artificial Intelligence (AI) nominated by the EU Commission presented “Ethics Guidelines for Trustworthy Artificial Intelligence,” followed in June 2019 by a second “Policy and investment recommendations” Document.&lt;/p&gt;&lt;p&gt;The Guidelines establish three characteristics (lawful, ethical, and robust) and seven key requirements (Human agency and oversight; Technical Robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; and Accountability) that the development of AI should follow.&lt;/p&gt;&lt;p&gt;The Guidelines are of utmost significance for the international debate over the regulation of AI. Firstly, they aspire to set a universal standard of care for the development of AI in the future. Secondly, they have been developed within a group of experts nominated by a regulatory body, and therefore will shape the normative approach in the EU regulation of...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/12s9x39n</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Rockwell, Geoffrey</name>
      </author>
      <author>
        <name>Black, Emily</name>
      </author>
      <author>
        <name>Selinger, Evan</name>
      </author>
      <author>
        <name>Davola, Antonio</name>
      </author>
      <author>
        <name>Zeide, Elana</name>
      </author>
      <author>
        <name>Gulson, Kalervo</name>
      </author>
    </item>
    <item>
      <title>Could AI Drive Transformative Social Progress? What Would This Require?</title>
      <link>https://escholarship.org/uc/item/0xj3356j</link>
      <description>The potential societal impacts of artificial intelligence (AI) and related technologies are so vast, they are often likened to those of past transformative technological changes such as the industrial or agricultural revolutions. They are also deeply uncertain, presenting a wide range of possibilities for good or ill – as indeed the diverse technologies lumped under the term AI are themselves diffuse, labile, and uncertain. Speculation about AI’s broad social impacts ranges from full-on utopia to dystopia, both in fictional and non-fiction accounts. Narrowing the field of view from aggregate impacts to particular impacts and their mechanisms, there is substantial (but far from total) agreement on some – e.g., profound disruption of labor markets, with the prospect of unemployment that is novel in scale and breadth – but great uncertainty on others, even as to sign. Will AI concentrate or distribute economic and political power – and if concentrate, then in whom? Will it make human lives...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/0xj3356j</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Parson, Edward</name>
      </author>
      <author>
        <name>Lempert, Robert</name>
      </author>
      <author>
        <name>Armstrong, Ben</name>
      </author>
      <author>
        <name>Crothers, Evan</name>
      </author>
      <author>
        <name>DeChant, Chad</name>
      </author>
      <author>
        <name>Novelli, Nick</name>
      </author>
    </item>
    <item>
      <title>Mob.ly App Makes Driving Safer by Changing How Drivers Navigate</title>
      <link>https://escholarship.org/uc/item/0hr7j0cv</link>
      <description>A group of multi-disciplinary researchers from across North America today announced the launch of a new app, Mob.ly, that reduces the incidence of road rage by promoting a driver’s sense of well-being and safety without sacrificing efficiency and access.</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/0hr7j0cv</guid>
      <pubDate>Thu, 26 Sep 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Millar, Jason</name>
      </author>
    </item>
    <item>
      <title>Artificial Intelligence in Strategic Context: An Introduction</title>
      <link>https://escholarship.org/uc/item/9c8651s6</link>
      <description>Artificial intelligence (AI), particularly various methods of machine learning (ML), has achieved landmark advances over the past few years in applications as diverse as playing complex games, language processing, speech recognition and synthesis, image identification, and facial recognition. These breakthroughs have brought a surge of popular, journalistic, and policy attention to the field, including both excitement about anticipated advances and the benefits they promise, and concern about societal impacts and risks – potentially arising through whatever combination of accident, malicious or reckless use, or just social and political disruption from the scale and rapidity of change.</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/9c8651s6</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Parson, Edward</name>
      </author>
      <author>
        <name>Re, Richard</name>
      </author>
      <author>
        <name>Solow-Niederman, Alicia</name>
      </author>
      <author>
        <name>Zeide, Elana</name>
      </author>
    </item>
    <item>
      <title>One Shot Learning In AI Innovation</title>
      <link>https://escholarship.org/uc/item/7f75n1d6</link>
      <description>Modern algorithmic design far exceeds the limits of human cognition in many ways. Armed with large data sets, programmers promise that their algorithms can better predict which prisoners are most likely to recidivate and where future crimes are likely to occur. Software designers further hope to use large data sets to uncover relationships between genes and disease that would take human researchers much longer to identify.</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/7f75n1d6</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Ram, Natalie</name>
      </author>
    </item>
    <item>
      <title>Genetically Modified Organisms: A Precautionary Tale for AI Governance</title>
      <link>https://escholarship.org/uc/item/6pc0k5v8</link>
      <description>The fruits of a long anticipated technology finally hit the market, with promise to extend human life, revolutionize production, improve consumer welfare, reduce poverty, and inspire countless yet-imagined innovations. A marvel of science and engineering, it reflects the cumulative efforts of a generation of researchers backed by research funding from the U.S. government and private sector investments in (predominantly American) technology companies. Though most scientists and policy elites consider the fruits of this technology to be safe, and the technology itself as a game-changer, there is still widespread acknowledgment that certain applications raise deeply challenging ethical issues, with some commentators even warning that careless or malicious applications could cause planet-wide catastrophes. Indeed, the technology has long been a fixture of science fiction, as an antagonist in allegories about hubris and science run amok—a narrative not lost on policy makers in the...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/6pc0k5v8</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Grotto, Andrew</name>
      </author>
    </item>
    <item>
      <title>Bezos World Or Levelers: Can We Choose Our Scenario?</title>
      <link>https://escholarship.org/uc/item/50v7196x</link>
      <description>Artificial intelligence (AI) augurs changes in society at least as large as those of the industrial revolution.  But much of the policy debate seems narrow – extrapolating current trends and asking how we might manage their rough edges.  This essay instead explores how AI might be used to enable fundamentally different future worlds and how one such future might be enabled by AI algorithms with different goals and functions than those most common today.</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/50v7196x</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Lempert, Robert</name>
      </author>
    </item>
    <item>
      <title>Autonomous Weapons And Coercive Threats</title>
      <link>https://escholarship.org/uc/item/2zx599j8</link>
      <description>Governments across the globe have been quick to adapt developments in artificial intelligence to military technologies. Prominent among the many changes recently introduced, autonomous weapon systems pose important new questions for our understanding of conflict generally, and coercive diplomacy in particular. These weapons dramatically decrease the cost of employing military force, in human terms on the battlefield, in financial and material terms, and in political terms for leaders who choose to pursue conflict. In this article, we analyze the implications of these new weapons for coercive diplomacy, exploring how they will influence the course of international crises. We argue that drones have different implications for relationships between relatively equal states than they do for unbalanced relationships where one state vastly overpowers the other. In asymmetric relationships, these weapons exaggerate existing power disparities. In these cases, the strong state is able to...</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/2zx599j8</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Sterbenz, Ciara</name>
      </author>
      <author>
        <name>Trager, Robert</name>
      </author>
    </item>
    <item>
      <title>Technocultural Pluralism</title>
      <link>https://escholarship.org/uc/item/26s3d0mz</link>
      <description>At the end of the Cold War, the renowned political scientist Samuel Huntington argued that future conflicts were more likely to stem from cultural frictions – ideologies, social norms, and political systems – than from political or economic frictions. Huntington focused his concern on the future of geopolitics in a rapidly shrinking world. But his argument applies as forcefully (if not more) to the interaction of technocultures.</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/26s3d0mz</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Osoba, Osonde A.</name>
      </author>
    </item>
    <item>
      <title>The Algorithm Dispositif (Notes Towards An Investigation)</title>
      <link>https://escholarship.org/uc/item/154618gr</link>
      <description>&lt;p&gt;How can we speak of algorithms as political?&lt;/p&gt;&lt;p&gt;The intuitive answer disposes us to presume that algorithms are not political. They are mathematical functions that operate to accomplish specific tasks. In this regard, algorithms operate independently of a specific belief system or of any one system’s ideological ambitions. They may be used for political ends, in the manner in which census data may be used for voter redistricting, but in and of themselves algorithms don’t do anything political.&lt;/p&gt;</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/154618gr</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Panagia, Davide</name>
      </author>
    </item>
    <item>
      <title>“Soft Law” Governance of Artificial Intelligence</title>
      <link>https://escholarship.org/uc/item/0jq252ks</link>
      <description>&lt;p&gt;On November 26, 2017, Elon Musk tweeted: “Got to regulate AI/robotics like we do food, drugs, aircraft &amp;amp; cars. Public risks require public oversight. Getting rid of the FAA wdn’t [sic] make flying safer. They’re there for good reason.”&lt;/p&gt;&lt;p&gt;In this and other recent pronouncements, Musk is calling for artificial intelligence (AI) to be regulated by traditional regulation, just as we regulate foods, drugs, aircraft and cars. Putting aside the quibble that food, drugs, aircraft and cars are each regulated very differently, these calls for regulation seem to envision one or more federal regulatory agencies adopting binding regulations to ensure the safety of AI. Musk is not alone in calling for “regulation” of AI, and some serious AI scholars and policymakers have likewise called for regulation of AI using traditional governmental regulatory approaches.&lt;/p&gt;</description>
      <guid isPermaLink="true">https://escholarship.org/uc/item/0jq252ks</guid>
      <pubDate>Fri, 8 Mar 2019 00:00:00 +0000</pubDate>
      <author>
        <name>Marchant, Gary</name>
      </author>
    </item>
  </channel>
</rss>
