A Decoder Structure Guided CNN-Transformer Network for face super-resolution

IF 1.5 | CAS Tier 4, Computer Science | JCR Q4, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Rui Dou, Jiawen Li, Xujie Wan, Heyou Chang, Hao Zheng, Guangwei Gao
IET Computer Vision, vol. 18, no. 4, pp. 473-484. Published 2023-11-22. DOI: 10.1049/cvi2.12251. Full-text PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.12251
Cited by: 0

Abstract

Recent advances in deep convolutional neural networks have improved face super-resolution performance through joint training with other tasks such as face analysis and landmark prediction. However, these methods have notable limitations. One major limitation is the need for manual annotations on the dataset for multi-task joint learning; this additional annotation process increases the computational cost of the network model. Additionally, since prior information is often estimated from low-quality faces, the obtained guidance information tends to be inaccurate. To address these challenges, a novel Decoder Structure Guided CNN-Transformer Network (DCTNet) is introduced, which utilises the newly proposed Global-Local Feature Extraction Unit (GLFEU) for effective embedding. Specifically, the proposed GLFEU is composed of an attention branch and a Transformer branch, to simultaneously restore global facial structure and local texture details. Additionally, a Multi-Stage Feature Fusion Module is incorporated to fuse features from different network stages, further improving the quality of the restored face images. Compared with previous methods, DCTNet improves Peak Signal-to-Noise Ratio (PSNR) by 0.23 and 0.19 dB on the CelebA and Helen datasets, respectively. Experimental results demonstrate that the designed DCTNet offers a simple yet powerful solution for recovering detailed facial structures from low-quality images.
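The abstract describes the GLFEU only at a high level: an attention branch for local texture plus a Transformer branch for global structure, with their outputs fused. The NumPy sketch below illustrates that dual-branch idea in the abstract's terms; every concrete choice here (channel attention as the attention branch, single-head self-attention as the Transformer branch, additive fusion, all function names) is an assumption for illustration, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_branch(feat):
    """CNN-style branch: reweight channels by their global average response
    (a squeeze-and-excitation-like stand-in, not the paper's exact layer)."""
    weights = softmax(feat.mean(axis=(1, 2)))        # (C,)
    return feat * weights[:, None, None]

def transformer_branch(feat):
    """Transformer branch: single-head self-attention over spatial tokens."""
    c, h, w = feat.shape
    tokens = feat.reshape(c, h * w).T                # (HW, C) spatial tokens
    scores = tokens @ tokens.T / np.sqrt(c)          # (HW, HW) similarities
    out = softmax(scores, axis=-1) @ tokens          # attend over all positions
    return out.T.reshape(c, h, w)

def glfeu_sketch(feat):
    """Fuse both branches; additive fusion is an assumption for illustration."""
    return attention_branch(feat) + transformer_branch(feat)

feat = np.random.default_rng(0).standard_normal((8, 4, 4))  # (C, H, W) features
out = glfeu_sketch(feat)                                    # same shape as input
```

The point of the sketch is the division of labour: the channel-attention path reweights local feature responses, while the self-attention path lets every spatial position aggregate information from all others, which is how a Transformer branch can recover global facial structure.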

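The reported gains are in PSNR, which for images with peak value MAX is 10 * log10(MAX^2 / MSE). A small self-contained sketch of the standard formula (not code from the paper):

```python
import numpy as np

def psnr(ref, deg, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - deg.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: an all-zero reference vs. a constant error of 10 gives MSE = 100,
# so PSNR = 10 * log10(255**2 / 100), about 28.13 dB.
ref = np.zeros((8, 8))
deg = np.full((8, 8), 10.0)
gain = psnr(ref, deg)
```

For scale: since delta-PSNR = 10 * log10(MSE_old / MSE_new), a 0.2 dB gain corresponds to an MSE ratio of 10**(-0.02), roughly a 4.5% reduction in mean squared error, which is why sub-dB PSNR improvements are meaningful in super-resolution benchmarks.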

Source journal
IET Computer Vision
Category: Engineering & Technology; Engineering, Electrical & Electronic
CiteScore: 3.30
Self-citation rate: 11.80%
Articles per year: 76
Review time: 3.4 months
Journal description: IET Computer Vision seeks original research papers in a wide range of areas of computer vision. The vision of the journal is to publish the highest quality research work that is relevant and topical to the field, but not forgetting those works that aim to introduce new horizons and set the agenda for future avenues of research in computer vision. IET Computer Vision welcomes submissions on the following topics:
- Biologically and perceptually motivated approaches to low-level vision (feature detection, etc.)
- Perceptual grouping and organisation
- Representation, analysis and matching of 2D and 3D shape
- Shape-from-X
- Object recognition
- Image understanding
- Learning with visual inputs
- Motion analysis and object tracking
- Multiview scene analysis
- Cognitive approaches in low-, mid- and high-level vision
- Control in visual systems
- Colour, reflectance and light
- Statistical and probabilistic models
- Face and gesture
- Surveillance
- Biometrics and security
- Robotics
- Vehicle guidance
- Automatic model acquisition
- Medical image analysis and understanding
- Aerial scene analysis and remote sensing
- Deep learning models in computer vision
Both methodological and applications-orientated papers are welcome. Manuscripts submitted are expected to include a detailed and analytical review of the literature and a state-of-the-art exposition of the original proposed research and its methodology, its thorough experimental evaluation, and, last but not least, comparative evaluation against relevant state-of-the-art methods. Submissions not abiding by these minimum requirements may be returned to authors without being sent to review.
Special Issues, current calls for papers:
- Computer Vision for Smart Cameras and Camera Networks: https://digital-library.theiet.org/files/IET_CVI_SC.pdf
- Computer Vision for the Creative Industries: https://digital-library.theiet.org/files/IET_CVI_CVCI.pdf