{"title":"用算法衡量种族歧视","authors":"David Arnold, Will Dobbie, Peter Hull","doi":"10.2139/ssrn.3753043","DOIUrl":null,"url":null,"abstract":"Algorithmic decision-making can lead to discrimination against legally protected groups, but measuring such discrimination is often hampered by a fundamental selection challenge. We develop new quasi-experimental tools to overcome this challenge and measure algorithmic discrimination in pretrial bail decisions. We show that the selection challenge reduces to the challenge of measuring four moments, which can be estimated by extrapolating quasi-experimental variation across as-good-as-randomly assigned decision-makers. Estimates from New York City show that both a sophisticated machine learning algorithm and a simpler regression model discriminate against Black defendants even though defendant race and ethnicity are not included in the training data.","PeriodicalId":166384,"journal":{"name":"PSN: Politics of Race (Topic)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"30","resultStr":"{\"title\":\"Measuring Racial Discrimination in Algorithms\",\"authors\":\"David Arnold, Will Dobbie, Peter Hull\",\"doi\":\"10.2139/ssrn.3753043\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Algorithmic decision-making can lead to discrimination against legally protected groups, but measuring such discrimination is often hampered by a fundamental selection challenge. We develop new quasi-experimental tools to overcome this challenge and measure algorithmic discrimination in pretrial bail decisions. We show that the selection challenge reduces to the challenge of measuring four moments, which can be estimated by extrapolating quasi-experimental variation across as-good-as-randomly assigned decision-makers. Estimates from New York City show that both a sophisticated machine learning algorithm and a simpler regression model discriminate against Black defendants even though defendant race and ethnicity are not included in the training data.\",\"PeriodicalId\":166384,\"journal\":{\"name\":\"PSN: Politics of Race (Topic)\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"30\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PSN: Politics of Race (Topic)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3753043\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PSN: Politics of Race (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3753043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Algorithmic decision-making can lead to discrimination against legally protected groups, but measuring such discrimination is often hampered by a fundamental selection challenge. We develop new quasi-experimental tools to overcome this challenge and measure algorithmic discrimination in pretrial bail decisions. We show that the selection challenge reduces to the challenge of measuring four moments, which can be estimated by extrapolating quasi-experimental variation across as-good-as-randomly assigned decision-makers. Estimates from New York City show that both a sophisticated machine learning algorithm and a simpler regression model discriminate against Black defendants even though defendant race and ethnicity are not included in the training data.
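The identification idea in the abstract, recovering unobserved potential-misconduct rates by extrapolating across quasi-randomly assigned judges who vary in leniency, can be illustrated with a small simulation. The sketch below is an assumption-laden toy, not the authors' estimator: the data-generating process, the quadratic extrapolation step, and every variable name are invented for illustration. It shows why a naive comparison among released defendants is contaminated by selection, and how leniency variation lets one estimate moments (release rates, misconduct rates among the released, and an extrapolated overall misconduct rate per group) that together identify the release-rate gap among truly "safe" defendants.

```python
# Toy simulation of the leniency-extrapolation idea (illustrative only;
# all parameters and the extrapolation form are assumptions, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
n, n_judges = 500_000, 40

# Potential misconduct if released (Y*). True risk is equal across groups,
# so any release-rate gap among Y* = 0 ("safe") defendants is discrimination.
black = rng.random(n) < 0.5
y_star = rng.random(n) < 0.3

# As-good-as-random assignment to judges who differ only in leniency.
judge = rng.integers(0, n_judges, n)
leniency = np.linspace(0.35, 0.85, n_judges)

# Release rule: a noisy risk signal plus a penalty on Black defendants --
# the discrimination the procedure should recover.
signal = y_star + rng.normal(0, 1.5, n) + 0.4 * black
released = signal < np.quantile(signal, leniency)[judge]

def moments(group):
    """Release rate, misconduct rate among the released, and an extrapolated
    estimate of E[Y*] for one group (boolean mask)."""
    p_rel = released[group].mean()
    y_given_rel = y_star[group & released].mean()
    # Across judges, q(p) = P(released and Y* = 1) is computable from observed
    # data (the conjunction is False whenever released is False) and tends to
    # E[Y*] as the release rate p tends to 1.
    p_j, q_j = [], []
    for j in range(n_judges):
        g = group & (judge == j)
        p_j.append(released[g].mean())
        q_j.append((released[g] & y_star[g]).mean())
    # Illustrative extrapolation: fit q(p) quadratically, evaluate at p = 1.
    ey_star = np.polyval(np.polyfit(p_j, q_j, 2), 1.0)
    return p_rel, y_given_rel, ey_star

def safe_release_rate(group):
    """Release rate among truly safe defendants, via the identity
    P(rel | Y*=0) = P(rel) * (1 - E[Y | rel]) / (1 - E[Y*])."""
    p_rel, y_given_rel, ey_star = moments(group)
    return p_rel * (1 - y_given_rel) / (1 - ey_star)

est_gap = safe_release_rate(~black) - safe_release_rate(black)
true_gap = (released[~black & ~y_star].mean()
            - released[black & ~y_star].mean())
naive_gap = y_star[black & released].mean() - y_star[~black & released].mean()

print(f"estimated release-rate gap among safe defendants: {est_gap:.3f}")
print(f"true gap built into the simulation:               {true_gap:.3f}")
print(f"naive misconduct gap among released (selection):  {naive_gap:.3f}")
```

In this toy version the selection challenge reduces to a handful of group-level moments, loosely analogous to the four moments the abstract mentions, and the otherwise unobservable E[Y*] is pinned down only because judge assignment is as good as random.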