'Un'Fair Machine Learning Algorithms

Runshan Fu, Manmohan Aseri, Param Vir Singh, K. Srinivasan
{"title":"'Un'Fair Machine Learning Algorithms","authors":"Runshan Fu, Manmohan Aseri, Param Vir Singh, K. Srinivasan","doi":"10.2139/ssrn.3408275","DOIUrl":null,"url":null,"abstract":"Ensuring fairness in algorithmic decision making is a crucial policy issue. Current legislation ensures fairness by barring algorithm designers from using demographic information in their decision making. As a result, to be legally compliant, the algorithms need to ensure equal treatment. However, in many cases, ensuring equal treatment leads to disparate impact particularly when there are differences among groups based on demographic classes. In response, several “fair” machine learning (ML) algorithms that require impact parity (e.g., equal opportunity) at the cost of equal treatment have recently been proposed to adjust for the societal inequalities. Advocates of fair ML propose changing the law to allow the use of protected class-specific decision rules. We show that the proposed fair ML algorithms that require impact parity, while conceptually appealing, can make everyone worse off, including the very class they aim to protect. Compared with the current law, which requires treatment parity, the fair ML algorithms, which require impact parity, limit the benefits of a more accurate algorithm for a firm. As a result, profit maximizing firms could underinvest in learning, that is, improving the accuracy of their machine learning algorithms. We show that the investment in learning decreases when misclassification is costly, which is exactly the case when greater accuracy is otherwise desired. Our paper highlights the importance of considering strategic behavior of stake holders when developing and evaluating fair ML algorithms. Overall, our results indicate that fair ML algorithms that require impact parity, if turned into law, may not be able to deliver some of the anticipated benefits. This paper was accepted by Kartik Hosanagar, information systems.","PeriodicalId":288317,"journal":{"name":"International Political Economy: Globalization eJournal","volume":"671 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Political Economy: Globalization eJournal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3408275","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 28

Abstract

Ensuring fairness in algorithmic decision making is a crucial policy issue. Current legislation ensures fairness by barring algorithm designers from using demographic information in their decision making. As a result, to be legally compliant, the algorithms need to ensure equal treatment. However, in many cases, ensuring equal treatment leads to disparate impact, particularly when there are differences among groups based on demographic classes. In response, several “fair” machine learning (ML) algorithms that require impact parity (e.g., equal opportunity) at the cost of equal treatment have recently been proposed to adjust for societal inequalities. Advocates of fair ML propose changing the law to allow the use of protected class-specific decision rules. We show that the proposed fair ML algorithms that require impact parity, while conceptually appealing, can make everyone worse off, including the very class they aim to protect. Compared with the current law, which requires treatment parity, the fair ML algorithms, which require impact parity, limit the benefits of a more accurate algorithm for a firm. As a result, profit-maximizing firms could underinvest in learning, that is, in improving the accuracy of their machine learning algorithms. We show that the investment in learning decreases when misclassification is costly, which is exactly the case when greater accuracy is otherwise desired. Our paper highlights the importance of considering the strategic behavior of stakeholders when developing and evaluating fair ML algorithms. Overall, our results indicate that fair ML algorithms that require impact parity, if turned into law, may not be able to deliver some of the anticipated benefits. This paper was accepted by Kartik Hosanagar, information systems.
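
To make the distinction concrete, the sketch below (not from the paper; the data, score model, and thresholds are synthetic illustrations) contrasts treatment parity, where one decision threshold is applied to every group, with impact parity in the equal-opportunity sense, where a group-specific threshold is chosen so that true positive rates match across groups.

# Minimal sketch (assumption: synthetic scores and outcomes, not the paper's model).
# Treatment parity = one threshold for everyone; impact parity (equal opportunity)
# = group-specific thresholds chosen to equalize true positive rates.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, qualified_rate, score_shift):
    """Synthetic group: binary outcome y and a model score correlated with y."""
    y = rng.binomial(1, qualified_rate, n)
    score = np.clip(0.5 * y + score_shift + rng.normal(0, 0.2, n), 0, 1)
    return y, score

def tpr(y, score, threshold):
    """True positive rate (share of qualified individuals approved) at a threshold."""
    approved = score >= threshold
    return approved[y == 1].mean()

# Two demographic groups with different base rates and score distributions.
y_a, s_a = simulate_group(5000, qualified_rate=0.6, score_shift=0.25)
y_b, s_b = simulate_group(5000, qualified_rate=0.4, score_shift=0.15)

# Treatment parity: the same threshold applied to both groups.
t = 0.5
print("Shared-threshold TPRs:", tpr(y_a, s_a, t), tpr(y_b, s_b, t))

# Impact parity (equal opportunity): choose a group-specific threshold for B
# so that its TPR matches group A's TPR under the shared threshold.
target = tpr(y_a, s_a, t)
candidates = np.linspace(0, 1, 201)
t_b = min(candidates, key=lambda c: abs(tpr(y_b, s_b, c) - target))
print("Group-specific threshold for B:", t_b)
print("Equalized TPRs:", tpr(y_a, s_a, t), tpr(y_b, s_b, t_b))

The group-specific threshold in the last step is exactly the kind of protected class-specific decision rule that the abstract notes is barred under current law and that fair ML advocates propose to permit.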