Diversity Blocks for De-biasing Classification Models

Shruti Nagpal, Maneet Singh, Richa Singh, Mayank Vatsa
{"title":"用于去偏分类模型的多样性块","authors":"Shruti Nagpal, Maneet Singh, Richa Singh, Mayank Vatsa","doi":"10.1109/IJCB48548.2020.9304931","DOIUrl":null,"url":null,"abstract":"Recent studies have highlighted a major caveat in various high performing automated systems for tasks such as facial analysis (e.g. gender prediction), object classification, and image to caption generation. Several of the existing systems have been shown to yield biased results towards or against a particular subgroup. The biased behavior exhibited by these models when deployed and used in a real world scenario presents with the challenge of automated systems being unfair. In this research, we propose a novel technique, diversity block, for de-biasing existing models without re-training them. The proposed technique requires small amount of training data and can be incorporated with an existing model for addressing the challenge of biased predictions. This is done by adding a diversity block and computing the prediction based on the scores of the original model and the diversity block in order to get a more confident and de-biased prediction. The efficacy of the proposed technique has been demonstrated on the task of gender prediction, along with an auxiliary case study on object classification.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"515 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Diversity Blocks for De-biasing Classification Models\",\"authors\":\"Shruti Nagpal, Maneet Singh, Richa Singh, Mayank Vatsa\",\"doi\":\"10.1109/IJCB48548.2020.9304931\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent studies have highlighted a major caveat in various high performing automated systems for tasks such as facial analysis (e.g. gender prediction), object classification, and image to caption generation. Several of the existing systems have been shown to yield biased results towards or against a particular subgroup. The biased behavior exhibited by these models when deployed and used in a real world scenario presents with the challenge of automated systems being unfair. In this research, we propose a novel technique, diversity block, for de-biasing existing models without re-training them. The proposed technique requires small amount of training data and can be incorporated with an existing model for addressing the challenge of biased predictions. This is done by adding a diversity block and computing the prediction based on the scores of the original model and the diversity block in order to get a more confident and de-biased prediction. 
The efficacy of the proposed technique has been demonstrated on the task of gender prediction, along with an auxiliary case study on object classification.\",\"PeriodicalId\":417270,\"journal\":{\"name\":\"2020 IEEE International Joint Conference on Biometrics (IJCB)\",\"volume\":\"515 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Joint Conference on Biometrics (IJCB)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCB48548.2020.9304931\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Joint Conference on Biometrics (IJCB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCB48548.2020.9304931","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Recent studies have highlighted a major caveat in various high-performing automated systems for tasks such as facial analysis (e.g., gender prediction), object classification, and image-to-caption generation. Several existing systems have been shown to yield results biased towards or against a particular subgroup. The biased behavior these models exhibit when deployed in real-world scenarios raises the challenge of automated systems being unfair. In this research, we propose a novel technique, the diversity block, for de-biasing existing models without re-training them. The proposed technique requires a small amount of training data and can be incorporated into an existing model to address the challenge of biased predictions. This is done by adding a diversity block and computing the prediction from the scores of the original model and the diversity block, in order to obtain a more confident and de-biased prediction. The efficacy of the proposed technique is demonstrated on the task of gender prediction, along with an auxiliary case study on object classification.
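
The mechanism the abstract describes, attaching a small, separately trained module to a fixed pre-trained model and combining the two sets of scores, can be sketched in code. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the `DiversityBlockWrapper` name, the block architecture, the input representation (a precomputed feature vector), and the score-averaging rule are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class DiversityBlockWrapper(nn.Module):
    """Wraps a frozen, pre-trained classifier with a small trainable
    'diversity block' and combines the scores of both for prediction."""

    def __init__(self, base_model: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.base_model = base_model
        # De-biasing without re-training: freeze the original model.
        for p in self.base_model.parameters():
            p.requires_grad = False
        # Hypothetical diversity block: a small head that can be trained
        # with a small amount of data (size and architecture are assumptions).
        self.diversity_block = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        base_scores = self.base_model(features)        # frozen model's scores
        block_scores = self.diversity_block(features)  # diversity block's scores
        # Combine both score vectors into the final prediction; simple
        # probability averaging stands in for the paper's combination rule.
        return 0.5 * (torch.softmax(base_scores, dim=-1)
                      + torch.softmax(block_scores, dim=-1))

# Example: a frozen linear classifier over 512-d face features (illustrative).
base = nn.Linear(512, 2)                  # e.g., a pre-trained gender head
model = DiversityBlockWrapper(base, feat_dim=512, num_classes=2)
optimizer = torch.optim.Adam(model.diversity_block.parameters(), lr=1e-3)
probs = model(torch.randn(8, 512))        # (8, 2) combined class probabilities
```

Under this formulation only the diversity block's parameters are passed to the optimizer, so a small amount of additional training data adjusts the combined prediction while the original model stays untouched, consistent with the abstract's claim of de-biasing without re-training.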