Machine Learning Equity and Accuracy in an Applied Justice Setting

J. Russell
{"title":"Machine Learning Equity and Accuracy in an Applied Justice Setting","authors":"J. Russell","doi":"10.1109/SMARTCOMP52413.2021.00050","DOIUrl":null,"url":null,"abstract":"There has been a growing awareness of bias in machine learning and a proliferation of different notions of fairness. While formal definitions of fairness outline different ways fairness might be computed, some notions of fairness do not provide guidance on implementation of machine learning in practice. In juvenile justice settings in particular, computational solutions to fairness often lead to ethical quandaries. Achieving algorithmic fairness in a setting that has long roots in structural racism, with data that reflects those in-equalities, may not be possible. And with different racial groups experiencing different rates of key outcomes (like a new disposition) at markedly different rates, it is difficult for any machine learning model to produce similar accuracy, false positive rates, and false negative rates. These ideas are tested with data from a large, urban county in the Midwest United States to examine how different models and different cutoffs combine to show the possibilities and limits of achieving machine learning fairness in an applied justice setting.","PeriodicalId":330785,"journal":{"name":"2021 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Smart Computing (SMARTCOMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMARTCOMP52413.2021.00050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

There has been a growing awareness of bias in machine learning and a proliferation of different notions of fairness. While formal definitions of fairness outline different ways fairness might be computed, some notions of fairness do not provide guidance on implementing machine learning in practice. In juvenile justice settings in particular, computational solutions to fairness often lead to ethical quandaries. Achieving algorithmic fairness in a setting that has long roots in structural racism, with data that reflect those inequalities, may not be possible. And with different racial groups experiencing key outcomes (like a new disposition) at markedly different rates, it is difficult for any machine learning model to produce similar accuracy, false positive rates, and false negative rates across groups. These ideas are tested with data from a large, urban county in the Midwest United States to examine how different models and different cutoffs combine to show the possibilities and limits of achieving machine learning fairness in an applied justice setting.
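To make the base-rate point concrete, the sketch below simulates the bind the abstract describes. It is not from the paper: the risk distributions, group labels, and cutoffs are hypothetical. Two groups are scored by a perfectly calibrated model (the score equals the true risk), but because the groups' base rates differ, any shared cutoff yields unequal false positive and false negative rates.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_metrics(y_true, scores, cutoff):
    """Accuracy, false positive rate, and false negative rate at a cutoff."""
    y_pred = scores >= cutoff
    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    return ((tp + tn) / y_true.size,
            fp / (fp + tn) if fp + tn else 0.0,
            fn / (fn + tp) if fn + tp else 0.0)

# Hypothetical groups whose underlying risk distributions have different
# means, i.e., different base rates of the outcome (like a new disposition).
# The score IS the true risk, so the model is perfectly calibrated for both.
n = 50_000
risk = {"A": rng.beta(2, 8, n),   # base rate around 0.20
        "B": rng.beta(4, 6, n)}   # base rate around 0.40

for name, r in risk.items():
    y = rng.random(n) < r                      # observed outcomes
    for cutoff in (0.3, 0.5):
        acc, fpr, fnr = group_metrics(y, r, cutoff)
        print(f"group {name} @ cutoff {cutoff}: "
              f"acc={acc:.3f}  FPR={fpr:.3f}  FNR={fnr:.3f}")
```

Running the sketch shows the higher-base-rate group absorbing more false positives at any shared cutoff; equalizing error rates would require group-specific cutoffs, which raises exactly the kind of ethical quandary the abstract notes.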