Comparison-Level Mitigation of Ethnic Bias in Face Recognition

Philipp Terhörst, M. Tran, N. Damer, Florian Kirchbuchner, Arjan Kuijper
{"title":"Comparison-Level Mitigation of Ethnic Bias in Face Recognition","authors":"Philipp Terhörst, M. Tran, N. Damer, Florian Kirchbuchner, Arjan Kuijper","doi":"10.1109/IWBF49977.2020.9107956","DOIUrl":null,"url":null,"abstract":"Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Previous works introduced approaches that aim at learning less biased representations. However, applying these approaches in real applications requires a complete replacement of the templates in the database. This replacement procedure further requires that a face image of each enrolled individual is stored as well. In this work, we propose the first bias-mitigating solution that works on the comparison-level of a biometric system. We propose a fairness- driven neural network classifier for the comparison of two biometric templates to replace the systems similarity function. This fair classifier is trained with a novel penalization term in the loss function to introduce the criteria of group and individual fairness to the decision process. This penalization term forces the score distributions of different ethnicities to be similar, leading to a reduction of the intra-ethnic performance differences. Experiments were conducted on two publicly available datasets and evaluated the performance of four different ethnicities. The results showed that for both fairness criteria, our proposed approach is able to significantly reduce the ethnic bias, while it preserves a high recognition ability. Our model, build on individual fairness, achieves bias reduction rate between 15.35% and 52.67%. 
In contrast to previous work, our solution is easy to integrate into existing systems by simply replacing the systems similarity functions with our fair template comparison approach.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"142 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IWBF49977.2020.9107956","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 23

Abstract

Current face recognition systems achieve high performance on several benchmark tests. Despite this progress, recent works showed that these systems are strongly biased against demographic sub-groups. Previous works introduced approaches that aim at learning less biased representations. However, applying these approaches in real applications requires a complete replacement of the templates in the database. This replacement procedure further requires that a face image of each enrolled individual is stored as well. In this work, we propose the first bias-mitigating solution that works on the comparison level of a biometric system. We propose a fairness-driven neural network classifier for the comparison of two biometric templates to replace the system's similarity function. This fair classifier is trained with a novel penalization term in the loss function to introduce the criteria of group and individual fairness to the decision process. This penalization term forces the score distributions of different ethnicities to be similar, leading to a reduction of the intra-ethnic performance differences. Experiments were conducted on two publicly available datasets, evaluating the performance on four different ethnicities. The results showed that for both fairness criteria, our proposed approach is able to significantly reduce the ethnic bias while preserving a high recognition ability. Our model, built on individual fairness, achieves bias reduction rates between 15.35% and 52.67%. In contrast to previous work, our solution is easy to integrate into existing systems by simply replacing the system's similarity function with our fair template comparison approach.
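The core idea described above — a standard classification loss plus a penalization term that pushes the comparison-score distributions of different ethnic groups toward each other — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`fairness_penalty`, `fair_loss`), the choice of binary cross-entropy as the base loss, the squared-deviation-of-group-means form of the penalty, and the weighting parameter `lam` are all assumptions made for clarity.

```python
import numpy as np

def binary_cross_entropy(scores, labels, eps=1e-12):
    """Standard BCE over comparison scores in [0, 1] (1 = genuine pair)."""
    scores = np.clip(scores, eps, 1.0 - eps)
    return float(-np.mean(labels * np.log(scores)
                          + (1 - labels) * np.log(1 - scores)))

def fairness_penalty(scores, groups):
    """Illustrative group-fairness term: penalize the deviation of each
    group's mean comparison score from the overall mean, so that score
    distributions across groups are forced to be similar."""
    overall = scores.mean()
    group_means = [scores[groups == g].mean() for g in np.unique(groups)]
    return float(np.mean([(m - overall) ** 2 for m in group_means]))

def fair_loss(scores, labels, groups, lam=1.0):
    """Total training loss: base classification loss plus the weighted
    fairness penalization term (`lam` is a hypothetical trade-off knob)."""
    return binary_cross_entropy(scores, labels) + lam * fairness_penalty(scores, groups)

# If both groups produce the same score distribution, the penalty vanishes
# and only the classification loss remains.
scores = np.array([0.9, 0.1, 0.9, 0.1])
labels = np.array([1, 0, 1, 0])
groups = np.array([0, 0, 1, 1])   # two demographic groups
print(fair_loss(scores, labels, groups, lam=1.0))
```

In a real system, `scores` would be the outputs of the fairness-driven comparison network for a training batch of template pairs, and minimizing `fair_loss` trades recognition accuracy against inter-group score-distribution similarity via `lam`.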