Algorithmic Design: Fairness Versus Accuracy

Annie Liang, Jay Lu, Xiaosheng Mu
{"title":"算法设计:公平性与准确性","authors":"Annie Liang, Jay Lu, Xiaosheng Mu","doi":"10.1145/3490486.3538237","DOIUrl":null,"url":null,"abstract":"Algorithms are increasingly used to guide consequential decisions, such as who should be granted bail or be approved for a loan. Motivated by growing empirical evidence, regulators are concerned about the possibility that the errors of these algorithms differ sharply across subgroups of the population. What are the tradeoffs between accuracy and fairness, and how do these tradeoffs depend on the inputs to the algorithm? We propose a model in which a designer chooses an algorithm that maps observed inputs into decisions, and introduce a fairness-accuracy Pareto frontier. We identify how the algorithm's inputs govern the shape of this frontier, showing (for example) that access to group identity reduces the error for the worse-off group everywhere along the frontier. We then apply these results to study an \"input-design\" problem where the designer controls the algorithm's inputs (for example, by legally banning an input), but the algorithm itself is chosen by another agent. We show that: (1) all designers strictly prefer to allow group identity if and only if the algorithm's other inputs satisfy a condition we call group-balance; (2) all designers strictly prefer to allow any input (including potentially biased inputs such as test scores) so long as group identity is permitted as an input, but may prefer to ban it when group identity is not.","PeriodicalId":209859,"journal":{"name":"Proceedings of the 23rd ACM Conference on Economics and Computation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Algorithmic Design: Fairness Versus Accuracy\",\"authors\":\"Annie Liang, Jay Lu, Xiaosheng Mu\",\"doi\":\"10.1145/3490486.3538237\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Algorithms are increasingly used to guide consequential decisions, such as who should be granted bail or be approved for a loan. Motivated by growing empirical evidence, regulators are concerned about the possibility that the errors of these algorithms differ sharply across subgroups of the population. What are the tradeoffs between accuracy and fairness, and how do these tradeoffs depend on the inputs to the algorithm? We propose a model in which a designer chooses an algorithm that maps observed inputs into decisions, and introduce a fairness-accuracy Pareto frontier. We identify how the algorithm's inputs govern the shape of this frontier, showing (for example) that access to group identity reduces the error for the worse-off group everywhere along the frontier. We then apply these results to study an \\\"input-design\\\" problem where the designer controls the algorithm's inputs (for example, by legally banning an input), but the algorithm itself is chosen by another agent. 
We show that: (1) all designers strictly prefer to allow group identity if and only if the algorithm's other inputs satisfy a condition we call group-balance; (2) all designers strictly prefer to allow any input (including potentially biased inputs such as test scores) so long as group identity is permitted as an input, but may prefer to ban it when group identity is not.\",\"PeriodicalId\":209859,\"journal\":{\"name\":\"Proceedings of the 23rd ACM Conference on Economics and Computation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 23rd ACM Conference on Economics and Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3490486.3538237\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 23rd ACM Conference on Economics and Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3490486.3538237","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13

Abstract

Algorithms are increasingly used to guide consequential decisions, such as who should be granted bail or be approved for a loan. Motivated by growing empirical evidence, regulators are concerned about the possibility that the errors of these algorithms differ sharply across subgroups of the population. What are the tradeoffs between accuracy and fairness, and how do these tradeoffs depend on the inputs to the algorithm? We propose a model in which a designer chooses an algorithm that maps observed inputs into decisions, and introduce a fairness-accuracy Pareto frontier. We identify how the algorithm's inputs govern the shape of this frontier, showing (for example) that access to group identity reduces the error for the worse-off group everywhere along the frontier. We then apply these results to study an "input-design" problem where the designer controls the algorithm's inputs (for example, by legally banning an input), but the algorithm itself is chosen by another agent. We show that: (1) all designers strictly prefer to allow group identity if and only if the algorithm's other inputs satisfy a condition we call group-balance; (2) all designers strictly prefer to allow any input (including potentially biased inputs such as test scores) so long as group identity is permitted as an input, but may prefer to ban it when group identity is not.
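To make the fairness-accuracy Pareto frontier concrete, here is a minimal sketch in Python. It is not the authors' construction: it assumes a stylized two-group population in which scores are noisier for group b, sweeps a family of threshold rules with and without access to group identity, and keeps the Pareto-undominated pairs of group error rates. All names, distributions, and the threshold family are illustrative assumptions.

```python
import numpy as np

# Stylized population: two groups whose scores are differently
# informative about the true outcome (an assumption of this sketch).
rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)           # 0 = group a, 1 = group b
label = rng.integers(0, 2, n)           # true binary outcome
noise = np.where(group == 0, 0.5, 1.5)  # group b's scores are noisier
score = label + noise * rng.normal(size=n)

def group_errors(threshold, use_group, offset=0.0):
    """Per-group error rates of a threshold decision rule.

    If use_group is True, the rule may condition on group identity:
    group b's threshold is shifted by `offset`.
    """
    t = threshold + use_group * np.where(group == 1, offset, 0.0)
    decision = score > t
    err_a = np.mean(decision[group == 0] != label[group == 0])
    err_b = np.mean(decision[group == 1] != label[group == 1])
    return (err_a, err_b)

# Sweep candidate rules and keep the Pareto-undominated error pairs;
# these trace out a frontier analogous to the paper's
# fairness-accuracy Pareto frontier.
candidates = []
for t in np.linspace(-1.0, 2.0, 61):
    candidates.append(group_errors(t, use_group=False))
    for off in np.linspace(-1.0, 1.0, 21):
        candidates.append(group_errors(t, use_group=True, offset=off))

pareto = [p for p in candidates
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                     for q in candidates)]
print(f"{len(pareto)} undominated rules out of {len(candidates)}")
```

The sketch makes the paper's comparative static easy to see: the rules that may condition on group identity strictly contain the group-blind rules, so allowing group identity weakly enlarges the feasible set of group error pairs and can only improve the frontier, consistent with the result that access to group identity reduces the worse-off group's error everywhere along the frontier.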