Code is law: how COMPAS affects the way the judiciary handles the risk of recidivism

IF 3.1 · Region 2 (Sociology) · JCR Q2 (Computer Science, Artificial Intelligence)
Christoph Engel, Lorenz Linhardt, Marcel Schubert
DOI: 10.1007/s10506-024-09389-8
Journal: Artificial Intelligence and Law, Volume 33, Issue 2, pp. 383–404
Published: 2024-02-09 (Journal Article)
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10506-024-09389-8.pdf
Article page: https://link.springer.com/article/10.1007/s10506-024-09389-8
Citations: 0

Abstract

Judges in multiple US states, such as New York, Pennsylvania, Wisconsin, California, and Florida, receive a prediction of defendants' recidivism risk generated by the COMPAS algorithm. If judges act on these predictions, they implicitly delegate normative decisions to proprietary software, even beyond the previously documented race and age biases. Using the ProPublica dataset, we demonstrate that COMPAS predictions favor jailing over release: COMPAS is biased against defendants. We show that this bias can largely be removed. Our proposed correction increases overall accuracy and attenuates anti-black and anti-young bias. However, it also slightly increases the risk that defendants who commit a new crime before trial are released. We argue that this normative decision should not be buried in the code. The tradeoff between the interests of innocent defendants and of future victims should not only be made transparent; the algorithm should be changed so that the legislator and the courts actually make this choice.
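The abstract does not spell out the correction itself, but the tradeoff it describes can be illustrated with a short sketch on the public ProPublica COMPAS data: shifting the cutoff that turns a COMPAS decile score into a jail/release recommendation lowers the false-positive rate (non-recidivists flagged as high risk) at the price of a higher false-negative rate (recidivists released before trial). This is a minimal sketch, not the authors' method; the CSV location, column names, and the conventional cutoff of 5 are assumptions taken from the ProPublica compas-analysis repository, and the paper's preprocessing filters are omitted.

```python
# Illustrative sketch only: shows the cutoff tradeoff the abstract describes,
# using the public ProPublica COMPAS dataset (column names are those in that CSV).
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/compas-analysis/"
       "master/compas-scores-two-years.csv")
df = pd.read_csv(URL)

def rates(cutoff: int) -> dict:
    """Treat decile_score >= cutoff as 'high risk' and compare the resulting
    recommendation with the observed two-year recidivism outcome."""
    pred = (df["decile_score"] >= cutoff).astype(int)
    actual = df["two_year_recid"]
    acc = (pred == actual).mean()
    # False positives: non-recidivists flagged high risk (jailed needlessly).
    fpr = ((pred == 1) & (actual == 0)).sum() / (actual == 0).sum()
    # False negatives: recidivists flagged low risk (released, then re-offend).
    fnr = ((pred == 0) & (actual == 1)).sum() / (actual == 1).sum()
    return {"cutoff": cutoff, "accuracy": acc, "FPR": fpr, "FNR": fnr}

# Sweeping the cutoff: a higher cutoff releases more defendants (lower FPR)
# while more released defendants go on to re-offend (higher FNR).
summary = pd.DataFrame([rates(c) for c in range(3, 9)])
print(summary.round(3))

# The same kind of rate, split by race at the conventional cutoff of 5,
# shows how the cutoff choice interacts with the documented disparity
# in false positives.
for race in ("African-American", "Caucasian"):
    sub = df[df["race"] == race]
    pred = (sub["decile_score"] >= 5).astype(int)
    negatives = (sub["two_year_recid"] == 0)
    fpr = ((pred == 1) & negatives).sum() / negatives.sum()
    print(race, "FPR at cutoff 5:", round(fpr, 3))
```

Picking any particular cutoff in such a sweep is exactly the normative choice between innocent defendants and future victims that, on the paper's argument, should be made openly by the legislator and the courts rather than buried in proprietary code.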

Source journal: Artificial Intelligence and Law
CiteScore: 9.50
Self-citation rate: 26.80%
Articles published: 33
Journal description: Artificial Intelligence and Law is an international forum for the dissemination of original interdisciplinary research in the following areas:

Theoretical or empirical studies in artificial intelligence (AI), cognitive psychology, jurisprudence, linguistics, or philosophy which address the development of formal or computational models of legal knowledge, reasoning, and decision making.
In-depth studies of innovative artificial intelligence systems that are being used in the legal domain.
Studies which address the legal, ethical and social implications of the field of Artificial Intelligence and Law.

Topics of interest include, but are not limited to, the following:

Computational models of legal reasoning and decision making; judgmental reasoning, adversarial reasoning, case-based reasoning, deontic reasoning, and normative reasoning.
Formal representation of legal knowledge: deontic notions, normative modalities, rights, factors, values, rules.
Jurisprudential theories of legal reasoning.
Specialized logics for law.
Psychological and linguistic studies concerning legal reasoning.
Legal expert systems; statutory systems, legal practice systems, predictive systems, and normative systems.
AI and law support for legislative drafting, judicial decision-making, and public administration.
Intelligent processing of legal documents; conceptual retrieval of cases and statutes, automatic text understanding, intelligent document assembly systems, hypertext, and semantic markup of legal documents.
Intelligent processing of legal information on the World Wide Web, legal ontologies, automated intelligent legal agents, electronic legal institutions, computational models of legal texts.
Ramifications for AI and Law in e-Commerce, automatic contracting and negotiation, digital rights management, and automated dispute resolution.
Ramifications for AI and Law in e-governance, e-government, e-Democracy, and knowledge-based systems supporting public services, public dialogue and mediation.
Intelligent computer-assisted instructional systems in law or ethics.
Evaluation and auditing techniques for legal AI systems.
Systemic problems in the construction and delivery of legal AI systems.
Impact of AI on the law and legal institutions.
Ethical issues concerning legal AI systems.

In addition to original research contributions, the Journal will include a Book Review section, a series of Technology Reports describing existing and emerging products, applications and technologies, and a Research Notes section of occasional essays posing interesting and timely research challenges for the field of Artificial Intelligence and Law. Financial support for the Journal of Artificial Intelligence and Law is provided by the University of Pittsburgh School of Law.