{"title":"Machine Learning Equity and Accuracy in an Applied Justice Setting","authors":"J. Russell","doi":"10.1109/SMARTCOMP52413.2021.00050","DOIUrl":null,"url":null,"abstract":"There has been a growing awareness of bias in machine learning and a proliferation of different notions of fairness. While formal definitions of fairness outline different ways fairness might be computed, some notions of fairness do not provide guidance on implementation of machine learning in practice. In juvenile justice settings in particular, computational solutions to fairness often lead to ethical quandaries. Achieving algorithmic fairness in a setting that has long roots in structural racism, with data that reflects those in-equalities, may not be possible. And with different racial groups experiencing different rates of key outcomes (like a new disposition) at markedly different rates, it is difficult for any machine learning model to produce similar accuracy, false positive rates, and false negative rates. These ideas are tested with data from a large, urban county in the Midwest United States to examine how different models and different cutoffs combine to show the possibilities and limits of achieving machine learning fairness in an applied justice setting.","PeriodicalId":330785,"journal":{"name":"2021 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Smart Computing (SMARTCOMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMARTCOMP52413.2021.00050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
There has been a growing awareness of bias in machine learning and a proliferation of different notions of fairness. While formal definitions of fairness outline different ways fairness might be computed, some notions of fairness do not provide guidance on implementing machine learning in practice. In juvenile justice settings in particular, computational solutions to fairness often lead to ethical quandaries. Achieving algorithmic fairness in a setting that has long roots in structural racism, with data that reflects those inequalities, may not be possible. And with different racial groups experiencing key outcomes (like a new disposition) at markedly different rates, it is difficult for any machine learning model to produce similar accuracy, false positive rates, and false negative rates across groups. These ideas are tested with data from a large, urban county in the Midwest United States to examine how different models and different cutoffs combine to show the possibilities and limits of achieving machine learning fairness in an applied justice setting.
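The tension described in the abstract can be illustrated with a minimal sketch, not taken from the paper: when two groups experience the key outcome at markedly different base rates, a single score cutoff generally cannot equalize accuracy, false positive rate, and false negative rate across groups at the same time. The group names, base rates, score model, and cutoffs below are all hypothetical, purely for illustration.

```python
# Illustrative sketch (hypothetical data, not the paper's model or dataset):
# group-wise accuracy, FPR, and FNR at several cutoffs when base rates differ.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate):
    """Generate hypothetical outcomes and risk scores for one group."""
    y = rng.binomial(1, base_rate, size=n)        # true outcome (e.g., a new disposition)
    scores = rng.normal(loc=y, scale=1.0)         # risk score correlated with the outcome
    return y, scores

def group_metrics(y, scores, cutoff):
    """Accuracy, false positive rate, and false negative rate at a given cutoff."""
    pred = (scores >= cutoff).astype(int)
    tp = np.sum((pred == 1) & (y == 1))
    fp = np.sum((pred == 1) & (y == 0))
    tn = np.sum((pred == 0) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    acc = (tp + tn) / len(y)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return acc, fpr, fnr

# Two hypothetical groups with markedly different base rates for the key outcome.
y_a, s_a = simulate_group(5000, base_rate=0.15)
y_b, s_b = simulate_group(5000, base_rate=0.40)

for cutoff in (0.5, 1.0, 1.5):
    acc_a, fpr_a, fnr_a = group_metrics(y_a, s_a, cutoff)
    acc_b, fpr_b, fnr_b = group_metrics(y_b, s_b, cutoff)
    print(f"cutoff={cutoff:.1f}  A: acc={acc_a:.2f} fpr={fpr_a:.2f} fnr={fnr_a:.2f}  "
          f"B: acc={acc_b:.2f} fpr={fpr_b:.2f} fnr={fnr_b:.2f}")
```

Under these assumptions, the printed metrics diverge between the two groups at every cutoff, which is the kind of trade-off the paper examines with real juvenile justice data and multiple models.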