Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition

Junzheng Zhang, Weijia Guo, Bochao Liu, Ruixin Shi, Yong Li, Shiming Ge
{"title":"为极低分辨率人脸识别提取生成-鉴别表征","authors":"Junzheng Zhang, Weijia Guo, Bochao Liu, Ruixin Shi, Yong Li, Shiming Ge","doi":"arxiv-2409.06371","DOIUrl":null,"url":null,"abstract":"Very low-resolution face recognition is challenging due to the serious loss\nof informative facial details in resolution degradation. In this paper, we\npropose a generative-discriminative representation distillation approach that\ncombines generative representation with cross-resolution aligned knowledge\ndistillation. This approach facilitates very low-resolution face recognition by\njointly distilling generative and discriminative models via two distillation\nmodules. Firstly, the generative representation distillation takes the encoder\nof a diffusion model pretrained for face super-resolution as the generative\nteacher to supervise the learning of the student backbone via feature\nregression, and then freezes the student backbone. After that, the\ndiscriminative representation distillation further considers a pretrained face\nrecognizer as the discriminative teacher to supervise the learning of the\nstudent head via cross-resolution relational contrastive distillation. In this\nway, the general backbone representation can be transformed into discriminative\nhead representation, leading to a robust and discriminative student model for\nvery low-resolution face recognition. Our approach improves the recovery of the\nmissing details in very low-resolution faces and achieves better knowledge\ntransfer. Extensive experiments on face datasets demonstrate that our approach\nenhances the recognition accuracy of very low-resolution faces, showcasing its\neffectiveness and adaptability.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Distilling Generative-Discriminative Representations for Very Low-Resolution Face Recognition\",\"authors\":\"Junzheng Zhang, Weijia Guo, Bochao Liu, Ruixin Shi, Yong Li, Shiming Ge\",\"doi\":\"arxiv-2409.06371\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Very low-resolution face recognition is challenging due to the serious loss\\nof informative facial details in resolution degradation. In this paper, we\\npropose a generative-discriminative representation distillation approach that\\ncombines generative representation with cross-resolution aligned knowledge\\ndistillation. This approach facilitates very low-resolution face recognition by\\njointly distilling generative and discriminative models via two distillation\\nmodules. Firstly, the generative representation distillation takes the encoder\\nof a diffusion model pretrained for face super-resolution as the generative\\nteacher to supervise the learning of the student backbone via feature\\nregression, and then freezes the student backbone. After that, the\\ndiscriminative representation distillation further considers a pretrained face\\nrecognizer as the discriminative teacher to supervise the learning of the\\nstudent head via cross-resolution relational contrastive distillation. In this\\nway, the general backbone representation can be transformed into discriminative\\nhead representation, leading to a robust and discriminative student model for\\nvery low-resolution face recognition. 
Our approach improves the recovery of the\\nmissing details in very low-resolution faces and achieves better knowledge\\ntransfer. Extensive experiments on face datasets demonstrate that our approach\\nenhances the recognition accuracy of very low-resolution faces, showcasing its\\neffectiveness and adaptability.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06371\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06371","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Very low-resolution face recognition is challenging due to the serious loss of informative facial details in resolution degradation. In this paper, we propose a generative-discriminative representation distillation approach that combines generative representation with cross-resolution aligned knowledge distillation. This approach facilitates very low-resolution face recognition by jointly distilling generative and discriminative models via two distillation modules. Firstly, the generative representation distillation takes the encoder of a diffusion model pretrained for face super-resolution as the generative teacher to supervise the learning of the student backbone via feature regression, and then freezes the student backbone. After that, the discriminative representation distillation further considers a pretrained face recognizer as the discriminative teacher to supervise the learning of the student head via cross-resolution relational contrastive distillation. In this way, the general backbone representation can be transformed into discriminative head representation, leading to a robust and discriminative student model for very low-resolution face recognition. Our approach improves the recovery of the missing details in very low-resolution faces and achieves better knowledge transfer. Extensive experiments on face datasets demonstrate that our approach enhances the recognition accuracy of very low-resolution faces, showcasing its effectiveness and adaptability.
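To make the two-stage pipeline concrete, below is a minimal PyTorch-style sketch of the two distillation modules the abstract describes. It is an illustration under stated assumptions, not the authors' released implementation: all module names (`diffusion_encoder`, `student_backbone`, `student_head`, `face_recognizer`), the choice of plain MSE for the feature regression, and the simplified relational contrastive loss (aligning batch-wise pairwise-similarity matrices with a KL term) are assumptions made for the sketch.

```python
# Minimal sketch of the two-stage distillation (hypothetical names throughout).
import torch
import torch.nn.functional as F

def stage1_generative_distillation(student_backbone, diffusion_encoder,
                                   loader, optimizer):
    """Stage 1: feature regression against the generative teacher (the
    encoder of a diffusion model pretrained for face super-resolution),
    then freeze the student backbone."""
    diffusion_encoder.eval()
    for lr_faces, hr_faces in loader:
        with torch.no_grad():
            # Assumption: the teacher encodes the high-resolution face and
            # its feature dimension matches the student's (a projection
            # layer may be needed in practice).
            t_feat = diffusion_encoder(hr_faces)
        s_feat = student_backbone(lr_faces)      # student sees low-res input
        loss = F.mse_loss(s_feat, t_feat)        # feature regression
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    for p in student_backbone.parameters():      # freeze after stage 1
        p.requires_grad = False

def relational_contrastive_loss(s_emb, t_emb, tau=0.1):
    """Simplified stand-in for cross-resolution relational contrastive
    distillation: align the pairwise-similarity structure of student
    (low-res) and teacher (high-res) embeddings within a batch."""
    s_rel = F.normalize(s_emb, dim=1) @ F.normalize(s_emb, dim=1).T
    t_rel = F.normalize(t_emb, dim=1) @ F.normalize(t_emb, dim=1).T
    return F.kl_div(F.log_softmax(s_rel / tau, dim=1),
                    F.softmax(t_rel / tau, dim=1),
                    reduction="batchmean")

def stage2_discriminative_distillation(student_backbone, student_head,
                                       face_recognizer, loader, optimizer):
    """Stage 2: with the backbone frozen, train only the student head
    against a pretrained face recognizer (discriminative teacher)."""
    face_recognizer.eval()
    for lr_faces, hr_faces in loader:
        with torch.no_grad():
            t_emb = face_recognizer(hr_faces)    # teacher on high-res faces
            feat = student_backbone(lr_faces)    # frozen general backbone
        s_emb = student_head(feat)               # only the head is trained
        loss = relational_contrastive_loss(s_emb, t_emb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The structural point the sketch captures is the ordering: the backbone is trained against the generative teacher and frozen before the head is trained against the discriminative teacher (so the stage-2 optimizer should hold only the head's parameters), which is how the general backbone representation gets transformed into a discriminative head representation.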