Algorithmic Stereotypes: Implications for Fairness of Generalizing from Past Data

D. McNamara
{"title":"Algorithmic Stereotypes: Implications for Fairness of Generalizing from Past Data","authors":"D. McNamara","doi":"10.1145/3306618.3314312","DOIUrl":null,"url":null,"abstract":"Background Algorithms are used to make or support decisions about people in a wide variety of contexts including the provision of financial credit, judicial risk assessments, applicant screening for employment, and online ad selection. Such algorithms often make predictions about the future behavior of individuals by generalizing from data recording the past behaviors of other individuals. Concerns have arisen about the fairness of these algorithms. Researchers have responded by developing definitions of fairness and algorithm designs that incorporate these definitions [2]. A common theme is the avoidance of discrimination on the basis of group membership, such as race or gender. This may be more complex than simply excluding the explicit consideration of an individual’s group membership, because other characteristics may be correlated with this group membership – a phenomenon known as redundant encoding [5]. Different definitions of fairness may be invoked by different stakeholders. The controversy associated with the COMPAS recidivism prediction system used in some parts of the United States showed this in practice. News organization ProPublica critiqued the system as unfair since among non-reoffenders, African-Americans were more likely to be marked high risk than whites, while among re-offenders, whites were more likely to be marked low risk than African-Americans [1]. COMPAS owner Equivant (formerly Northpointe) argued that the algorithm was not unfair since among those marked high risk, African-Americans were no less likely to reoffend than whites [4].","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306618.3314312","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Background: Algorithms are used to make or support decisions about people in a wide variety of contexts, including the provision of financial credit, judicial risk assessments, applicant screening for employment, and online ad selection. Such algorithms often make predictions about the future behavior of individuals by generalizing from data recording the past behaviors of other individuals. Concerns have arisen about the fairness of these algorithms. Researchers have responded by developing definitions of fairness and algorithm designs that incorporate these definitions [2]. A common theme is the avoidance of discrimination on the basis of group membership, such as race or gender. This may be more complex than simply excluding the explicit consideration of an individual's group membership, because other characteristics may be correlated with that group membership, a phenomenon known as redundant encoding [5]. Different definitions of fairness may be invoked by different stakeholders. The controversy associated with the COMPAS recidivism prediction system used in some parts of the United States showed this in practice. The news organization ProPublica critiqued the system as unfair because, among non-reoffenders, African-Americans were more likely than whites to be marked high risk, while among reoffenders, whites were more likely than African-Americans to be marked low risk [1]. COMPAS owner Equivant (formerly Northpointe) argued that the algorithm was not unfair because, among those marked high risk, African-Americans were no less likely to reoffend than whites [4].
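
To make the two competing fairness notions in the COMPAS dispute concrete, the sketch below (not part of the paper) computes, for each group, the false positive rate underlying ProPublica's critique and the positive predictive value underlying Equivant's defense. It is a minimal illustration on synthetic data: the variable names `group`, `marked_high_risk`, and `reoffended` and the toy numbers are assumptions for exposition, not COMPAS data.

```python
import numpy as np

def false_positive_rate(marked_high_risk, reoffended):
    """Share of non-reoffenders who were marked high risk (ProPublica's metric)."""
    non_reoffenders = ~reoffended
    return marked_high_risk[non_reoffenders].mean()

def positive_predictive_value(marked_high_risk, reoffended):
    """Share of those marked high risk who went on to reoffend (Equivant's metric)."""
    return reoffended[marked_high_risk].mean()

# Toy data: group membership, the algorithm's risk label, and the observed outcome.
rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)
marked_high_risk = rng.random(n) < 0.4
reoffended = rng.random(n) < 0.3

# Comparing these per-group numbers is how each side framed (un)fairness:
# equal false positive rates vs. equal reoffense rates among those marked high risk.
for g in ["A", "B"]:
    mask = group == g
    fpr = false_positive_rate(marked_high_risk[mask], reoffended[mask])
    ppv = positive_predictive_value(marked_high_risk[mask], reoffended[mask])
    print(f"group {g}: FPR={fpr:.3f}, PPV={ppv:.3f}")
```

On real data the two criteria generally cannot both be equalized across groups when base rates differ, which is why the stakeholders could each point to a metric favoring their position.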