The Perils of Objectivity: Towards a Normative Framework for Fair Judicial Decision-Making

Andi Peng, Malina Simard-Halm
{"title":"The Perils of Objectivity: Towards a Normative Framework for Fair Judicial Decision-Making","authors":"Andi Peng, Malina Simard-Halm","doi":"10.1145/3375627.3375869","DOIUrl":null,"url":null,"abstract":"Fair decision-making in criminal justice relies on the recognition and incorporation of infinite shades of grey. In this paper, we detail how algorithmic risk assessment tools are counteractive to fair legal proceedings in social institutions where desired states of the world are contested ethically and practically. We provide a normative framework for assessing fair judicial decision-making, one that does not seek the elimination of human bias from decision-making as algorithmic fairness efforts currently focus on, but instead centers on sophisticating the incorporation of individualized or discretionary bias--a process that is requisitely human. Through analysis of a case study on social disadvantage, we use this framework to provide an assessment of potential features of consideration, such as political disempowerment and demographic exclusion, that are irreconcilable by current algorithmic efforts and recommend their incorporation in future reform.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3375627.3375869","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Fair decision-making in criminal justice relies on the recognition and incorporation of infinite shades of grey. In this paper, we detail how algorithmic risk assessment tools are counteractive to fair legal proceedings in social institutions where desired states of the world are contested ethically and practically. We provide a normative framework for assessing fair judicial decision-making, one that does not seek the elimination of human bias from decision-making as algorithmic fairness efforts currently focus on, but instead centers on sophisticating the incorporation of individualized or discretionary bias, a process that is requisitely human. Through analysis of a case study on social disadvantage, we use this framework to provide an assessment of potential features of consideration, such as political disempowerment and demographic exclusion, that are irreconcilable by current algorithmic efforts and recommend their incorporation in future reform.