{"title":"SPADNet: Structure Prior-Aware Dynamic Network for Face Super-Resolution","authors":"Chenyang Wang;Junjun Jiang;Kui Jiang;Xianming Liu","doi":"10.1109/TBIOM.2024.3382870","DOIUrl":null,"url":null,"abstract":"The recent emergence of deep learning neural networks has propelled advancements in the field of face super-resolution. While these deep learning-based methods have shown significant performance improvements, they depend overwhelmingly on fixed, spatially shared kernels within standard convolutional layers. This leads to a neglect of the diverse facial structures and regions, consequently struggling to reconstruct high-fidelity face images. As a highly structured object, the structural features of a face are crucial for representing and reconstructing face images. To this end, we introduce a structure prior-aware dynamic network (SPADNet) that leverages facial structure priors as a foundation to generate structure-aware dynamic kernels for the distinctive super-resolution of various face images. In view of that spatially shared kernels are not well-suited for specific-regions representation, a local structure-adaptive convolution (LSAC) is devised to characterize the local relation of facial features. It is more effective for precise texture representation. Meanwhile, a global structure-aware convolution (GSAC) is elaborated to capture the global facial contours to guarantee the structure consistency. These strategies form a unified face reconstruction framework, which reconciles the distinct representation of diverse face images and individual structure fidelity. Extensive experiments confirm the superiority of our proposed SPADNet over state-of-the-art methods. The source codes of the proposed method will be available at \n<uri>https://github.com/wcy-cs/SPADNet</uri>\n.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 3","pages":"326-340"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10485196/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The recent emergence of deep neural networks has propelled advances in face super-resolution. While these deep learning-based methods have achieved significant performance improvements, they rely overwhelmingly on fixed, spatially shared kernels within standard convolutional layers. This neglects the diversity of facial structures and regions, and consequently these methods struggle to reconstruct high-fidelity face images. Because a face is a highly structured object, its structural features are crucial for representing and reconstructing face images. To this end, we introduce a structure prior-aware dynamic network (SPADNet) that leverages facial structure priors to generate structure-aware dynamic kernels, enabling super-resolution tailored to individual face images. Since spatially shared kernels are ill-suited to representing specific regions, a local structure-adaptive convolution (LSAC) is devised to characterize the local relations among facial features, yielding more precise texture representation. Meanwhile, a global structure-aware convolution (GSAC) is designed to capture global facial contours and guarantee structural consistency. These strategies form a unified face reconstruction framework that reconciles the distinct representation of diverse face images with individual structural fidelity. Extensive experiments confirm the superiority of the proposed SPADNet over state-of-the-art methods. The source code of the proposed method will be available at https://github.com/wcy-cs/SPADNet.
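
To illustrate the core idea described in the abstract, the sketch below shows one plausible way to generate spatially varying (dynamic) convolution kernels conditioned on a facial structure prior such as a parsing map, so that different face regions are filtered with different weights. This is a minimal, hedged example, not the authors' implementation: the class name `StructureAwareDynamicConv`, the kernel-prediction branch, and the choice of a 19-class parsing prior are all assumptions made for illustration. The official code is in the linked repository.

```python
# Minimal sketch (assumed, not the authors' code) of structure-prior-conditioned
# dynamic convolution: a per-pixel k x k kernel is predicted from the structure
# prior and applied to the image features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureAwareDynamicConv(nn.Module):
    """Predicts a per-pixel k x k kernel from the structure prior and applies
    it to the feature map (a spatially varying convolution)."""

    def __init__(self, channels: int, prior_channels: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        # Kernel-prediction branch: maps the structure prior to k*k weights
        # per spatial location (shared across feature channels for simplicity).
        self.kernel_head = nn.Sequential(
            nn.Conv2d(prior_channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, kernel_size * kernel_size, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Per-pixel kernels, normalized so each location's weights sum to 1.
        kernels = F.softmax(self.kernel_head(prior), dim=1)     # (B, k*k, H, W)
        # Unfold features into k x k neighborhoods and weight them per pixel.
        patches = F.unfold(feat, self.k, padding=self.k // 2)   # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        kernels = kernels.view(b, 1, self.k * self.k, h * w)
        out = (patches * kernels).sum(dim=2).view(b, c, h, w)
        return out


if __name__ == "__main__":
    # Toy usage: 64-channel features and a hypothetical 19-class parsing prior.
    conv = StructureAwareDynamicConv(channels=64, prior_channels=19)
    feat = torch.randn(1, 64, 32, 32)
    prior = torch.randn(1, 19, 32, 32)
    print(conv(feat, prior).shape)  # torch.Size([1, 64, 32, 32])
```

In this reading, an LSAC-like module would apply such prior-conditioned kernels locally for region-specific texture, while a GSAC-like module would derive its conditioning from a global (e.g., pooled or contour-level) view of the prior to enforce structural consistency; the exact designs are specified in the paper and repository.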