The Role of In-Group Bias and Balanced Data: A Comparison of Human and Machine Recidivism Risk Predictions

Arpita Biswas, M. Kołczyńska, Saana Rantanen, Polina Rozenshtein
{"title":"群体内偏见和平衡数据的作用:人类和机器再犯风险预测的比较","authors":"Arpita Biswas, M. Kołczyńska, Saana Rantanen, Polina Rozenshtein","doi":"10.1145/3378393.3402507","DOIUrl":null,"url":null,"abstract":"Fairness and bias in automated decision-making gain importance as the prevalence of algorithms increases in different areas of social life. This paper contributes to the discussion of algorithmic fairness with a crowdsourced vignette survey on recidivism risk assessment, which we compare to previous studies on this topic and to predictions of an automated recidivism risk tool. We use the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) and the Broward County dataset of pre-trial defendants as a data source and for purposes of comparability with the earlier analysis. In our survey, each respondent assessed recidivism risk for a set of vignettes describing real defendants, where each set was balanced with regard to the defendants' race and re-offender status. The survey ensured a 50: 50 ratio of black and white respondents. We found that predictions in our survey---while less accurate---were considerably more fair in terms of equalized odds than previous surveys. We attribute it to the differences in survey design: using the balanced set of vignettes and not providing feedback after responding to each vignette. We also analyzed the performance and fairness of predictions by race of respondent and defendant. We found that both white and black respondents tend to favor defendants of their own race, but the magnitude of the effect is relatively small. In addition to the survey, we train two statistical models, one trained with balanced data and other with unbalanced data. We observe that the model trained on balanced data is substantially more fair and possess less in-group bias.","PeriodicalId":176951,"journal":{"name":"Proceedings of the 3rd ACM SIGCAS Conference on Computing and Sustainable Societies","volume":"62 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"The Role of In-Group Bias and Balanced Data: A Comparison of Human and Machine Recidivism Risk Predictions\",\"authors\":\"Arpita Biswas, M. Kołczyńska, Saana Rantanen, Polina Rozenshtein\",\"doi\":\"10.1145/3378393.3402507\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Fairness and bias in automated decision-making gain importance as the prevalence of algorithms increases in different areas of social life. This paper contributes to the discussion of algorithmic fairness with a crowdsourced vignette survey on recidivism risk assessment, which we compare to previous studies on this topic and to predictions of an automated recidivism risk tool. We use the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) and the Broward County dataset of pre-trial defendants as a data source and for purposes of comparability with the earlier analysis. In our survey, each respondent assessed recidivism risk for a set of vignettes describing real defendants, where each set was balanced with regard to the defendants' race and re-offender status. The survey ensured a 50: 50 ratio of black and white respondents. We found that predictions in our survey---while less accurate---were considerably more fair in terms of equalized odds than previous surveys. 
We attribute it to the differences in survey design: using the balanced set of vignettes and not providing feedback after responding to each vignette. We also analyzed the performance and fairness of predictions by race of respondent and defendant. We found that both white and black respondents tend to favor defendants of their own race, but the magnitude of the effect is relatively small. In addition to the survey, we train two statistical models, one trained with balanced data and other with unbalanced data. We observe that the model trained on balanced data is substantially more fair and possess less in-group bias.\",\"PeriodicalId\":176951,\"journal\":{\"name\":\"Proceedings of the 3rd ACM SIGCAS Conference on Computing and Sustainable Societies\",\"volume\":\"62 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 3rd ACM SIGCAS Conference on Computing and Sustainable Societies\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3378393.3402507\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd ACM SIGCAS Conference on Computing and Sustainable Societies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3378393.3402507","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

Fairness and bias in automated decision-making gain importance as the prevalence of algorithms increases in different areas of social life. This paper contributes to the discussion of algorithmic fairness with a crowdsourced vignette survey on recidivism risk assessment, which we compare to previous studies on this topic and to the predictions of an automated recidivism risk tool. We use the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool and the Broward County dataset of pre-trial defendants as a data source and for comparability with the earlier analysis. In our survey, each respondent assessed recidivism risk for a set of vignettes describing real defendants, where each set was balanced with regard to the defendants' race and re-offender status. The survey ensured a 50:50 ratio of black and white respondents. We found that predictions in our survey, while less accurate, were considerably more fair in terms of equalized odds than those in previous surveys. We attribute this to differences in survey design: using a balanced set of vignettes and not providing feedback after each response. We also analyzed the performance and fairness of predictions by race of respondent and defendant. We found that both white and black respondents tend to favor defendants of their own race, but the magnitude of the effect is relatively small. In addition to the survey, we train two statistical models, one trained with balanced data and the other with unbalanced data. We observe that the model trained on balanced data is substantially more fair and exhibits less in-group bias.
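The abstract leans on two technical ideas: balancing training data across race and re-offender status, and measuring fairness via equalized odds (the gap in false-positive and true-positive rates between groups). The sketch below is not the authors' code; it assumes a COMPAS/Broward-style pandas DataFrame with hypothetical column names `race` and `two_year_recid`, exactly two race groups, and a logistic regression as a stand-in for the paper's unspecified statistical models.

```python
# Minimal sketch, assuming a DataFrame with columns "race", "two_year_recid",
# and numeric feature columns. Logistic regression and the balancing scheme
# are illustrative assumptions, not the paper's exact procedure.
import pandas as pd
from sklearn.linear_model import LogisticRegression


def balance_by_group(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Downsample so every (race, label) cell has the same number of rows."""
    cell_size = df.groupby(["race", "two_year_recid"]).size().min()
    return (
        df.groupby(["race", "two_year_recid"], group_keys=False)
        .apply(lambda g: g.sample(n=cell_size, random_state=seed))
        .reset_index(drop=True)
    )


def equalized_odds_gap(y_true: pd.Series, y_pred: pd.Series, group: pd.Series) -> dict:
    """Absolute FPR and TPR differences between two groups (assumes exactly two)."""
    rates = {}
    for g in group.unique():
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        tpr = ((yp == 1) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        rates[g] = (fpr, tpr)
    (fpr_a, tpr_a), (fpr_b, tpr_b) = rates.values()
    return {"fpr_gap": abs(fpr_a - fpr_b), "tpr_gap": abs(tpr_a - tpr_b)}


# Example usage on a dataframe `df` with feature columns listed in `X_cols`:
# train = balance_by_group(df)                       # balanced training variant
# model = LogisticRegression(max_iter=1000).fit(train[X_cols], train["two_year_recid"])
# preds = pd.Series(model.predict(df[X_cols]), index=df.index)
# print(equalized_odds_gap(df["two_year_recid"], preds, df["race"]))
```

Comparing the `fpr_gap`/`tpr_gap` values for a model trained on the balanced set against one trained on the raw, unbalanced set is one way to reproduce the kind of contrast the abstract describes; smaller gaps indicate predictions closer to satisfying equalized odds.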