Active Fairness in Algorithmic Decision Making

Alejandro Noriega-Campero, Michiel A. Bakker, Bernardo Garcia-Bulle, A. Pentland
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3306618.3314277
Published: 2018-09-28
Citations: 70

Abstract

Society increasingly relies on machine learning models for automated decision making. Yet, efficiency gains from automation have come paired with concern for algorithmic discrimination that can systematize inequality. Recent work has proposed optimal post-processing methods that randomize classification decisions for a fraction of individuals, in order to achieve fairness measures related to parity in errors and calibration. These methods, however, have raised concern due to the information inefficiency, intra-group unfairness, and Pareto sub-optimality they entail. The present work proposes an alternative active framework for fair classification, where, in deployment, a decision-maker adaptively acquires information according to the needs of different groups or individuals, towards balancing disparities in classification performance. We propose two such methods, where information collection is adapted to group- and individual-level needs respectively. We show on real-world datasets that these can achieve: 1) calibration and single error parity (e.g., equal opportunity); and 2) parity in both false positive and false negative rates (i.e., equal odds). Moreover, we show that by leveraging their additional degree of freedom, active approaches can substantially outperform randomization-based classifiers previously considered optimal, while avoiding limitations such as intra-group unfairness.
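The parity criteria named in the abstract can be made concrete as group-wise error rates: "equal opportunity" asks for equal false negative rates (equivalently, equal true positive rates) across groups, while "equal odds" additionally asks for equal false positive rates. The sketch below is an illustrative reconstruction of those definitions only, not the authors' active-acquisition method; the function name and toy data are hypothetical.

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Per-group false positive rate (FPR) and false negative rate (FNR).

    Equal opportunity: FNR equal across groups.
    Equal odds: both FPR and FNR equal across groups.
    """
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        # FPR: fraction of true negatives predicted positive
        fpr = float(np.mean(yp[yt == 0] == 1)) if np.any(yt == 0) else float("nan")
        # FNR: fraction of true positives predicted negative
        fnr = float(np.mean(yp[yt == 1] == 0)) if np.any(yt == 1) else float("nan")
        rates[g] = {"FPR": fpr, "FNR": fnr}
    return rates

# Toy example with two groups
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([0, 1, 1, 0, 0, 1, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(group_error_rates(y_true, y_pred, groups))
```

In this toy data the two groups share an FPR of 0.5 but differ in FNR (0.5 vs. 0.0), so equal opportunity is violated; the paper's active approach would instead gather additional features for the disadvantaged group until such disparities are balanced, rather than randomizing a fraction of its decisions.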