Tetsushi Ohki, Yuya Sato, Masakatsu Nishigaki, Koichi Ito
arXiv:2409.09274 · arXiv - CS - Computers and Society · published 2024-09-14
LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels
Demographic bias is one of the major challenges for face recognition systems.
Most existing studies of demographic bias depend heavily on specific
demographic groups or on demographic classifiers, making it difficult to
address performance for unrecognized groups. This paper introduces
``LabellessFace'', a novel framework that mitigates demographic bias in face
recognition without the demographic group labels typically required for
fairness considerations. We propose a novel fairness metric, the class
favoritism level, which quantifies the extent to which the model favors
specific classes across the dataset. Leveraging this metric, we introduce the
fair class margin penalty, an extension of existing margin-based metric
learning that dynamically adjusts the margin of each class according to its
favoritism level, promoting fairness across all attributes. By treating each
class as an individual in a face recognition system, we facilitate learning
that minimizes differences in authentication accuracy among individuals.
Comprehensive experiments demonstrate that the proposed method enhances
fairness while maintaining authentication accuracy.
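The abstract does not give the exact formulation of the class favoritism level or the fair class margin penalty, but the general idea can be sketched under stated assumptions: favoritism is approximated here as each class's deviation from the mean per-class accuracy, and the penalty is modeled as a per-class additive angular margin in the style of ArcFace. All function names and the `scale` parameter below are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def class_favoritism_levels(per_class_acc):
    # Hypothetical proxy for the paper's metric: how far each class's
    # accuracy deviates from the dataset mean. Positive = favored class.
    per_class_acc = np.asarray(per_class_acc, dtype=float)
    return per_class_acc - per_class_acc.mean()

def fair_class_margins(base_margin, favoritism, scale=0.1):
    # Assumed adjustment rule: favored classes get a larger angular margin
    # (a harder training target), disfavored classes a smaller one,
    # pushing per-class accuracies toward each other.
    margins = base_margin + scale * np.asarray(favoritism, dtype=float)
    return np.clip(margins, 0.0, None)

def margin_logits(cos_theta, labels, margins, s=64.0):
    # ArcFace-style additive angular margin, but with a per-class margin
    # instead of a single global one.
    logits = s * np.asarray(cos_theta, dtype=float).copy()
    rows = np.arange(len(labels))
    theta = np.arccos(np.clip(cos_theta[rows, labels], -1.0, 1.0))
    logits[rows, labels] = s * np.cos(theta + margins[labels])
    return logits
```

In this sketch a class that the model already recognizes well receives a larger margin, so the softmax target for that class becomes harder to satisfy, while under-served classes train against an easier target; the intent matches the abstract's description of dynamically adjusting learning parameters by favoritism level.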