Authors: Kiel Brennan-Marquez, Vincent Chiao
Journal: New Criminal Law Review, 23(1)
DOI: 10.1525/nclr.2021.24.3.275
Published: 2021-07-01 (Journal Article)
Algorithmic Decision-Making When Humans Disagree on Ends
Which interpretive tasks should be delegated to machines? This question has become a focal point of “tech governance” debates. One familiar answer is that while machines are capable of implementing tasks whose ends are uncontroversial, machine delegation is inappropriate for tasks that elude human consensus. After all, if human experts cannot agree about the nature of a task, what hope is there for machines?
Here, we turn this position around. When humans disagree about the nature of a task, that should be prima facie grounds for machine delegation, not against it. The reason has to do with fairness: affected parties should be able to predict the outcomes of particular cases. Indeterminate decision-making environments—those in which humans disagree about ends—are inherently unpredictable in that, for any given case, the distribution of likely outcomes will depend on a specific decision maker’s view of the relevant end. This injects an irreducible dynamic of randomization into the decision-making process from the perspective of non-repeat players. To the extent machine decisions aggregate across disparate views of a task’s relevant ends, they promise improvement on this specific dimension of predictability. Whatever the other virtues and drawbacks of machine decision-making, this gain should be recognized and factored into governance.
The essay has two parts. In the first, we draw a distinction between determinacy and certainty as epistemic properties and fashion a taxonomy of decision types. In the second part, we bring the formal point alive through a case study of criminal sentencing.
Journal Introduction:
Focused on examinations of crime and punishment in domestic, transnational, and international contexts, New Criminal Law Review provides timely, innovative commentary and in-depth scholarly analyses on a wide range of criminal law topics. The journal encourages a variety of methodological and theoretical approaches and is a crucial resource for criminal law professionals in both academia and the criminal justice system. The journal publishes thematic forum sections and special issues, full-length peer-reviewed articles, book reviews, and occasional correspondence.