Global Cross-Entropy Loss for Deep Face Recognition

Weisong Zhao;Xiangyu Zhu;Haichao Shi;Xiao-Yu Zhang;Guoying Zhao;Zhen Lei
IEEE Transactions on Image Processing, vol. 34, pp. 1672-1685
DOI: 10.1109/TIP.2025.3546481
Published: 2025-03-05
URL: https://ieeexplore.ieee.org/document/10914510/
Citations: 0

Abstract

Contemporary deep face recognition techniques predominantly utilize the Softmax loss function, designed based on the similarities between sample features and class prototypes. These similarities can be categorized into four types: in-sample target similarity, in-sample non-target similarity, out-sample target similarity, and out-sample non-target similarity. When a sample feature from a specific class is designated as the anchor, the similarity between this sample and any class prototype is referred to as in-sample similarity. In contrast, the similarity between samples from other classes and any class prototype is known as out-sample similarity. The terms target and non-target indicate whether the sample and the class prototype used for similarity calculation belong to the same identity. The conventional Softmax loss function promotes higher in-sample target similarity than in-sample non-target similarity. However, it overlooks the relation between in-sample and out-sample similarity. In this paper, we propose the Global Cross-Entropy (GCE) loss, which promotes 1) greater in-sample target similarity over both the in-sample and out-sample non-target similarity, and 2) smaller in-sample non-target similarity than both the in-sample and out-sample target similarity. In addition, we propose a bilateral margin penalty on both in-sample target and non-target similarity, improving the discrimination and generalization of the deep face model. To bridge the gap between training and testing of face recognition, we adapt the GCE loss into a pairwise framework by randomly replacing some class prototypes with sample features. We designate the model trained with the proposed Global Cross-Entropy loss as GFace. Extensive experiments on several public face benchmarks, including LFW, CALFW, CPLFW, CFP-FP, AgeDB, IJB-C, IJB-B, MFR-Ongoing, and MegaFace, demonstrate the superiority of GFace over other methods. Additionally, GFace exhibits robust performance in general visual recognition tasks.
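The abstract describes the core idea conceptually: a cross-entropy over one margin-penalized target similarity contrasted against a *global* pool of non-target similarities, drawn from both the anchor's own row (in-sample) and every other sample in the batch (out-sample). The exact formulation is not given in the abstract, so the following is only a minimal NumPy sketch of that idea; the function name, the scale `s`, and the margin values `m_pos`/`m_neg` are all assumptions, and the loops are written for clarity rather than speed.

```python
import numpy as np

def gce_loss_sketch(features, prototypes, labels, s=64.0, m_pos=0.35, m_neg=0.10):
    """Hedged sketch of a Global Cross-Entropy-style loss (not the paper's exact form).

    features:   (B, D) L2-normalized sample embeddings
    prototypes: (C, D) L2-normalized class prototypes
    labels:     (B,)   integer class ids
    """
    B, C = features.shape[0], prototypes.shape[0]
    # Cosine similarities between every sample and every class prototype.
    sim = features @ prototypes.T  # (B, C)

    # Global non-target pool: each sample's similarities to all prototypes
    # except its own class. For a given anchor this covers both in-sample
    # non-targets (its own row) and out-sample non-targets (other rows),
    # each pushed up by a bilateral margin m_neg.
    non_target = []
    for j in range(B):
        for c in range(C):
            if c != labels[j]:
                non_target.append(sim[j, c] + m_neg)
    non_target = np.array(non_target)

    losses = []
    for i in range(B):
        # In-sample target similarity, penalized by the other side of the margin.
        target = sim[i, labels[i]] - m_pos
        # Softmax cross-entropy over {target} ∪ the global non-target pool.
        logits = np.concatenate(([target], non_target)) * s
        logits -= logits.max()  # numerical stability (log-sum-exp trick)
        losses.append(-logits[0] + np.log(np.exp(logits).sum()))
    return float(np.mean(losses))
```

Because the denominator pools non-target similarities from the whole batch rather than only the anchor's row, lowering any sample's non-target similarity reduces every anchor's loss, which is one plausible reading of the "global" constraint the abstract states.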