Implicit Mutual Learning With Dual-Branch Networks for Face Super-Resolution

Kangli Zeng; Zhongyuan Wang; Tao Lu; Jianyu Chen; Zheng He; Zhen Han
{"title":"利用双分支网络进行隐式互学以实现人脸超分辨率","authors":"Kangli Zeng;Zhongyuan Wang;Tao Lu;Jianyu Chen;Zheng He;Zhen Han","doi":"10.1109/TBIOM.2024.3354333","DOIUrl":null,"url":null,"abstract":"Face super-resolution (SR) algorithms have recently made significant progress. However, most existing methods prefer to employ texture and structure information together to promote the generation of high-resolution features, neglecting the mutual encouragement between them, as well as the effective unification of their own low-level and high-level information, thus yielding unsatisfactory results. To address these problems, we propose an implicit mutual learning of dual-branch networks for face super-resolution, which adequately considers both extraction and aggregation of structure and texture information. The proposed approach consists of four essential blocks. First, the deep feature extractor is equipped with a deep feature reinforcement module (DFRM) based on two-stage cross-dimensional attention (TCA), which behaves in the texture enhancement and structure reconstruction branches, respectively. Then, we elaborate two information exchange blocks for two branches, one for the first information exchange block (FIEB) from the texture branch to the structure branch and one for the second information exchange block (SIEB) from the structure branch to the texture branch. These two interaction blocks perform further fusion enhancement of potential features. Finally, a hybrid fusion network (HFNet) based on supervised attention executes adaptive aggregation of the enhanced texture and structure maps. Additionally, we use a joint loss function that modifies the recovery of structure information, diminishes the use of potentially erroneous information, and encourages the generation of realistic face images. Experiments on public datasets show that our method consistently achieves better quantitative and qualitative results than SOTA methods.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 2","pages":"182-194"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Implicit Mutual Learning With Dual-Branch Networks for Face Super-Resolution\",\"authors\":\"Kangli Zeng;Zhongyuan Wang;Tao Lu;Jianyu Chen;Zheng He;Zhen Han\",\"doi\":\"10.1109/TBIOM.2024.3354333\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Face super-resolution (SR) algorithms have recently made significant progress. However, most existing methods prefer to employ texture and structure information together to promote the generation of high-resolution features, neglecting the mutual encouragement between them, as well as the effective unification of their own low-level and high-level information, thus yielding unsatisfactory results. To address these problems, we propose an implicit mutual learning of dual-branch networks for face super-resolution, which adequately considers both extraction and aggregation of structure and texture information. The proposed approach consists of four essential blocks. First, the deep feature extractor is equipped with a deep feature reinforcement module (DFRM) based on two-stage cross-dimensional attention (TCA), which behaves in the texture enhancement and structure reconstruction branches, respectively. 
Then, we elaborate two information exchange blocks for two branches, one for the first information exchange block (FIEB) from the texture branch to the structure branch and one for the second information exchange block (SIEB) from the structure branch to the texture branch. These two interaction blocks perform further fusion enhancement of potential features. Finally, a hybrid fusion network (HFNet) based on supervised attention executes adaptive aggregation of the enhanced texture and structure maps. Additionally, we use a joint loss function that modifies the recovery of structure information, diminishes the use of potentially erroneous information, and encourages the generation of realistic face images. Experiments on public datasets show that our method consistently achieves better quantitative and qualitative results than SOTA methods.\",\"PeriodicalId\":73307,\"journal\":{\"name\":\"IEEE transactions on biometrics, behavior, and identity science\",\"volume\":\"6 2\",\"pages\":\"182-194\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on biometrics, behavior, and identity science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10409565/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10409565/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Face super-resolution (SR) algorithms have recently made significant progress. However, most existing methods use texture and structure information jointly to generate high-resolution features while neglecting the mutual reinforcement between the two, as well as the effective unification of each branch's own low-level and high-level information, and therefore yield unsatisfactory results. To address these problems, we propose implicit mutual learning with dual-branch networks for face super-resolution, which accounts for both the extraction and the aggregation of structure and texture information. The proposed approach consists of four essential blocks. First, the deep feature extractor is equipped with a deep feature reinforcement module (DFRM) based on two-stage cross-dimensional attention (TCA), which operates in the texture-enhancement and structure-reconstruction branches, respectively. Then, we design two information exchange blocks, one per direction: the first information exchange block (FIEB) passes information from the texture branch to the structure branch, and the second information exchange block (SIEB) passes information from the structure branch to the texture branch. These two interaction blocks further fuse and enhance the latent features. Finally, a hybrid fusion network (HFNet) based on supervised attention adaptively aggregates the enhanced texture and structure maps. Additionally, we use a joint loss function that refines the recovery of structure information, suppresses the use of potentially erroneous information, and encourages the generation of realistic face images. Experiments on public datasets show that our method consistently achieves better quantitative and qualitative results than state-of-the-art (SOTA) methods.
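The paper itself ships no code. As a rough orientation, here is a minimal PyTorch sketch of the dual-branch layout the abstract describes. The block names (DFRM, FIEB, SIEB, HFNet) follow the paper, but every internal detail below, plain convolutions, a crude channel attention standing in for TCA, a 1×1 fusion standing in for supervised attention, and the scale factor, is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the dual-branch layout from the abstract.
# Block names follow the paper; internals are placeholder assumptions.
import torch
import torch.nn as nn

class DFRM(nn.Module):
    """Deep feature reinforcement module (stand-in: convs + channel attention
    in place of the paper's two-stage cross-dimensional attention, TCA)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.body(x)
        return x + y * self.att(y)  # residual, attention-reweighted features

class ExchangeBlock(nn.Module):
    """Stand-in for FIEB/SIEB: inject features from the other branch."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, own, other):
        return own + self.fuse(torch.cat([own, other], dim=1))

class DualBranchSR(nn.Module):
    def __init__(self, ch=64, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.texture = DFRM(ch)        # texture-enhancement branch
        self.structure = DFRM(ch)      # structure-reconstruction branch
        self.fieb = ExchangeBlock(ch)  # texture -> structure
        self.sieb = ExchangeBlock(ch)  # structure -> texture
        # stand-in for HFNet: 1x1 fusion of the two maps, then upsampling
        self.hfnet = nn.Conv2d(2 * ch, ch, 1)
        self.up = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr):
        f = self.head(lr)
        tex, struc = self.texture(f), self.structure(f)
        struc = self.fieb(struc, tex)  # first exchange: texture into structure
        tex = self.sieb(tex, struc)    # second exchange: structure into texture
        fused = self.hfnet(torch.cat([tex, struc], dim=1))
        return self.up(fused)

if __name__ == "__main__":
    sr = DualBranchSR()(torch.randn(1, 3, 32, 32))
    print(sr.shape)  # torch.Size([1, 3, 128, 128])
```

In training, a backbone like this would be paired with the joint loss the abstract mentions (a pixel-reconstruction term plus structure-aware and realism-oriented terms); the exact components and weights are defined in the paper.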