Improving Prediction Fairness via Model Ensemble

Dheeraj Bhaskaruni, Hui Hu, Chao Lan
{"title":"通过模型集成提高预测公平性","authors":"Dheeraj Bhaskaruni, Hui Hu, Chao Lan","doi":"10.1109/ICTAI.2019.00273","DOIUrl":null,"url":null,"abstract":"Fair machine learning is a topical problem. It studies how to mitigate unethical bias against minority people in model prediction. A promising solution is ensemble learning - Nina et al [1] first argue that one can obtain a fair model by bagging a set of standard models. However, they do not present any empirical evidence or discuss effective ensemble strategy for fair learning. In this paper, we propose a new ensemble strategy for fair learning. It adopts the AdaBoost framework, but unlike AdaBoost that upweights mispredicted instances, it upweights unfairly predicted instances which we identify using a variant of Luong's k-NN based situation testing method [2]. Through experiments on two real-world data sets, we show our proposed strategy achieves higher fairness than the bagging strategy discussed by Nina et al and several baseline methods. Our results also suggest standard ensemble strategies may not be sufficient for improving fairness.","PeriodicalId":346657,"journal":{"name":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Improving Prediction Fairness via Model Ensemble\",\"authors\":\"Dheeraj Bhaskaruni, Hui Hu, Chao Lan\",\"doi\":\"10.1109/ICTAI.2019.00273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Fair machine learning is a topical problem. It studies how to mitigate unethical bias against minority people in model prediction. A promising solution is ensemble learning - Nina et al [1] first argue that one can obtain a fair model by bagging a set of standard models. However, they do not present any empirical evidence or discuss effective ensemble strategy for fair learning. In this paper, we propose a new ensemble strategy for fair learning. It adopts the AdaBoost framework, but unlike AdaBoost that upweights mispredicted instances, it upweights unfairly predicted instances which we identify using a variant of Luong's k-NN based situation testing method [2]. Through experiments on two real-world data sets, we show our proposed strategy achieves higher fairness than the bagging strategy discussed by Nina et al and several baseline methods. 
Our results also suggest standard ensemble strategies may not be sufficient for improving fairness.\",\"PeriodicalId\":346657,\"journal\":{\"name\":\"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTAI.2019.00273\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI.2019.00273","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16

Abstract

Fair machine learning is a topical problem. It studies how to mitigate unethical bias against minority people in model prediction. A promising solution is ensemble learning: Nina et al. [1] first argued that one can obtain a fair model by bagging a set of standard models. However, they do not present any empirical evidence or discuss an effective ensemble strategy for fair learning. In this paper, we propose a new ensemble strategy for fair learning. It adopts the AdaBoost framework, but unlike AdaBoost, which upweights mispredicted instances, it upweights unfairly predicted instances, which we identify using a variant of Luong's k-NN-based situation testing method [2]. Through experiments on two real-world data sets, we show that our proposed strategy achieves higher fairness than the bagging strategy discussed by Nina et al. and several baseline methods. Our results also suggest that standard ensemble strategies may not be sufficient for improving fairness.
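The abstract describes the strategy at a high level: run an AdaBoost-style loop, but reweight instances flagged as unfairly treated by a k-NN situation test rather than misclassified ones. The paper's exact update rule and test are not given here, so the following is only a minimal sketch of that idea, assuming binary labels, a binary protected attribute, scikit-learn-style base learners, and illustrative names and parameters (`situation_test`, `fair_boost`, the gap threshold `tau`, the upweighting factor `gamma`) that are not from the paper.

```python
# Minimal sketch of a fairness-oriented boosting loop in the spirit of
# the abstract: instead of upweighting mispredicted instances as in
# AdaBoost, upweight instances flagged by a k-NN situation test
# (following the idea of Luong et al. [2]). All helper names, the
# threshold tau, and the update rule are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier


def situation_test(X, y_pred, protected, k=5, tau=0.2):
    """Flag protected-group instances whose k nearest non-protected
    neighbors receive the positive prediction noticeably more often
    than their k nearest protected neighbors."""
    unfair = np.zeros(len(X), dtype=bool)
    prot_idx = np.where(protected == 1)[0]
    nonprot_idx = np.where(protected == 0)[0]
    nn_prot = NearestNeighbors(n_neighbors=k).fit(X[prot_idx])
    nn_nonprot = NearestNeighbors(n_neighbors=k).fit(X[nonprot_idx])
    for i in prot_idx:
        _, nbr_p = nn_prot.kneighbors(X[i:i + 1])
        _, nbr_n = nn_nonprot.kneighbors(X[i:i + 1])
        rate_p = y_pred[prot_idx[nbr_p[0]]].mean()
        rate_n = y_pred[nonprot_idx[nbr_n[0]]].mean()
        # A large gap in positive-prediction rates between the two
        # neighborhoods is read as evidence of unfair treatment of i.
        if rate_n - rate_p > tau:
            unfair[i] = True
    return unfair


def fair_boost(X, y, protected, n_rounds=10, k=5, tau=0.2, gamma=2.0):
    """AdaBoost-like loop that upweights unfairly predicted instances."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    models = []
    for _ in range(n_rounds):
        clf = DecisionTreeClassifier(max_depth=3)
        clf.fit(X, y, sample_weight=w)
        y_pred = clf.predict(X)
        # Key difference from AdaBoost: the weight update targets
        # unfairly predicted instances, not mispredicted ones.
        unfair = situation_test(X, y_pred, protected, k=k, tau=tau)
        w[unfair] *= gamma
        w /= w.sum()
        models.append(clf)
    return models


def ensemble_predict(models, X):
    """Majority vote over the ensemble members."""
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)
```

For simplicity this sketch combines members by an unweighted majority vote; standard AdaBoost instead weights members by their training error, and the paper's actual combination rule may differ from both.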