Improving Prediction Fairness via Model Ensemble
Dheeraj Bhaskaruni, Hui Hu, Chao Lan
2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), November 2019. DOI: 10.1109/ICTAI.2019.00273
Fair machine learning is a topical problem: it studies how to mitigate unethical bias against minorities in model prediction. A promising solution is ensemble learning. Nina et al. [1] first argued that a fair model can be obtained by bagging a set of standard models; however, they presented no empirical evidence and did not discuss an effective ensemble strategy for fair learning. In this paper, we propose a new ensemble strategy for fair learning. It adopts the AdaBoost framework, but unlike AdaBoost, which upweights mispredicted instances, it upweights unfairly predicted instances, which we identify using a variant of Luong's k-NN-based situation testing method [2]. Through experiments on two real-world data sets, we show that the proposed strategy achieves higher fairness than the bagging strategy discussed by Nina et al. and several baseline methods. Our results also suggest that standard ensemble strategies may not be sufficient for improving fairness.
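The strategy described in the abstract can be sketched as an AdaBoost-style loop that, instead of upweighting mispredicted instances, upweights instances flagged as unfairly predicted by a k-NN situation test. This is a minimal illustrative sketch, not the authors' actual implementation: the flagging rule, the weight-update factor, and the hyper-parameters `k` and `tau` are all assumptions.

```python
# Hypothetical sketch of the fairness-aware boosting strategy from the
# abstract. All names and parameters are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def situation_test_flags(X, s, y_pred, k=5, tau=0.2):
    """Flag instance i as unfairly predicted when the mean prediction over
    its k nearest neighbours in the other protected group differs from the
    mean over its own group's neighbours by more than tau (a rough variant
    of Luong-style situation testing; k and tau are assumed)."""
    flags = np.zeros(len(X), dtype=bool)
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        own = np.argsort(np.where(s == s[i], d, np.inf))[:k]
        other = np.argsort(np.where(s != s[i], d, np.inf))[:k]
        if abs(y_pred[own].mean() - y_pred[other].mean()) > tau:
            flags[i] = True
    return flags

def fair_boost(X, y, s, rounds=10):
    """AdaBoost-like ensemble of decision stumps: sample weights grow on
    unfairly predicted instances rather than on classification errors."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    models = []
    for _ in range(rounds):
        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        unfair = situation_test_flags(X, s, pred)
        # Upweight the unfairly predicted instances, then renormalise.
        w[unfair] *= 2.0
        w /= w.sum()
        models.append(clf)
    return models

def ensemble_predict(models, X):
    # Majority vote over the boosted stumps.
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)
```

The key design difference from standard AdaBoost is the weight-update signal: the situation-test flags replace the 0/1 error indicator, so the ensemble concentrates capacity on instances whose predictions depend on group membership rather than on instances that are merely hard to classify.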