Algorithmic Decision-Making When Humans Disagree on Ends

Kiel Brennan-Marquez, Vincent Chiao
New Criminal Law Review (IF 0.4, Q2 Social Sciences)
DOI: 10.1525/nclr.2021.24.3.275
Published: 2021-07-01 · Citations: 1

Abstract

Which interpretive tasks should be delegated to machines? This question has become a focal point of “tech governance” debates. One familiar answer is that while machines are capable of implementing tasks whose ends are uncontroversial, machine delegation is inappropriate for tasks that elude human consensus. After all, if human experts cannot agree about the nature of a task, what hope is there for machines? Here, we turn this position around. When humans disagree about the nature of a task, that should be prima facie grounds for machine delegation, not against it. The reason has to do with fairness: affected parties should be able to predict the outcomes of particular cases. Indeterminate decision-making environments—those in which humans disagree about ends—are inherently unpredictable in that, for any given case, the distribution of likely outcomes will depend on a specific decision maker’s view of the relevant end. This injects an irreducible dynamic of randomization into the decision-making process from the perspective of non-repeat players. To the extent machine decisions aggregate across disparate views of a task’s relevant ends, they promise improvement on this specific dimension of predictability. Whatever the other virtues and drawbacks of machine decision-making, this gain should be recognized and factored into governance. The essay has two parts. In the first, we draw a distinction between determinacy and certainty as epistemic properties and fashion a taxonomy of decision types. In the second part, we bring the formal point alive through a case study of criminal sentencing.
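The abstract's core formal claim, that aggregating across disparate views of a task's ends improves predictability for non-repeat players, can be made concrete with a minimal Monte Carlo sketch. The sketch below is not from the paper; the judge "views," the sentence lengths, and the mean-aggregation rule are all hypothetical assumptions chosen purely for illustration.

```python
import random
import statistics

# Hypothetical setup: each judge holds a different view of the relevant
# end (e.g., deterrence vs. rehabilitation), and that view translates
# into a different sentence (in months) for the same case.
JUDGE_VIEWS = [12, 18, 24, 36, 60]  # assumed values, one per judge

def random_judge_outcome() -> int:
    """A non-repeat player is assigned one judge at random."""
    return random.choice(JUDGE_VIEWS)

def aggregated_outcome() -> float:
    """A machine decision rule that aggregates across all views
    (here, simply the mean of the judges' preferred sentences)."""
    return statistics.mean(JUDGE_VIEWS)

# Simulate many one-shot cases under each regime.
random.seed(0)
human_outcomes = [random_judge_outcome() for _ in range(10_000)]
machine_outcomes = [aggregated_outcome() for _ in range(10_000)]

print(f"random assignment: mean={statistics.mean(human_outcomes):.1f}, "
      f"stdev={statistics.stdev(human_outcomes):.1f}")
print(f"aggregated rule:   mean={statistics.mean(machine_outcomes):.1f}, "
      f"stdev={statistics.stdev(machine_outcomes):.1f}")
```

In this toy setup both regimes yield the same expected sentence, but a one-shot litigant under random assignment faces a wide spread (a standard deviation of roughly 17 months here), while the aggregated rule collapses that spread to zero. That gap is the specific "dimension of predictability" the abstract isolates; nothing in the sketch speaks to the other virtues or drawbacks of machine decision-making.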
About the Journal

Focused on examinations of crime and punishment in domestic, transnational, and international contexts, New Criminal Law Review provides timely, innovative commentary and in-depth scholarly analyses on a wide range of criminal law topics. The journal encourages a variety of methodological and theoretical approaches and is a crucial resource for criminal law professionals in both academia and the criminal justice system. The journal publishes thematic forum sections and special issues, full-length peer-reviewed articles, book reviews, and occasional correspondence.