Multi-Stage Statistical Texture-Guided GAN for Tilted Face Frontalization
Kangli Zeng; Zhongyuan Wang; Tao Lu; Jianyu Chen; Chao Liang; Zhen Han
IEEE Transactions on Image Processing, vol. 34, pp. 1726-1736, published 2025-03-13. DOI: 10.1109/TIP.2025.3548896
Abstract
Existing pose-invariant face recognition mainly focuses on frontal or profile faces, whereas high-pitch-angle face recognition, prevalent in surveillance videos, has yet to be investigated. More importantly, tilted faces differ significantly from frontal or profile faces in the latent feature space due to self-occlusion, which seriously hampers the extraction of key features for face recognition. In this paper, we progressively reshape challenging high-pitch-angle faces into a series of small-angle, approximately frontal faces and exploit a statistical approach to learn texture features, ensuring accurate generation of facial components. In particular, we design a statistical texture-guided GAN for tilted face frontalization (STG-GAN) consisting of three main components. First, the face encoder extracts shallow features; the face statistical texture modeling module then learns multi-scale face texture features from the statistical distributions of these shallow features. Finally, the face decoder performs feature deformation guided by the statistical texture features while highlighting pose-invariant discriminative face information. In addition to multi-scale content, identity, and adversarial losses, we develop a pose contrastive loss on latent spatial features to enforce pose consistency and make the face frontalization process more reliable. On this basis, we propose a divide-and-conquer strategy that uses STG-GAN to synthesize faces with progressively smaller pitch angles over multiple stages, achieving frontalization gradually. Unified end-to-end training across the stages yields numerous intermediate results that reasonably approximate the ground truth. Extensive qualitative and quantitative experiments on multiple face datasets demonstrate the superiority of our approach.
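To make the encoder/statistical-texture/decoder pipeline and the multi-stage divide-and-conquer idea concrete, below is a minimal, hedged PyTorch sketch. The module names (FaceEncoder, StatTextureModule, FaceDecoder), the channel widths, the use of channel-wise mean/std as the texture statistic, and the margin-based form of the pose contrastive loss are illustrative assumptions only; they do not reproduce the authors' actual STG-GAN architecture or loss weights.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FaceEncoder(nn.Module):
    """Extracts shallow multi-scale features from an input face image."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(3, ch, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)

    def forward(self, x):
        f1 = F.relu(self.conv1(x))   # 1/2 resolution
        f2 = F.relu(self.conv2(f1))  # 1/4 resolution
        return [f1, f2]


class StatTextureModule(nn.Module):
    """Summarizes each feature map by channel-wise statistics (mean, std),
    a simple stand-in for the paper's statistical texture modeling."""
    def forward(self, feats):
        stats = []
        for f in feats:
            mu = f.mean(dim=(2, 3))
            sigma = f.std(dim=(2, 3))
            stats.append(torch.cat([mu, sigma], dim=1))  # shape (B, 2C)
        return stats


class FaceDecoder(nn.Module):
    """Reconstructs a smaller-pitch face, modulated by texture statistics."""
    def __init__(self, ch=64):
        super().__init__()
        self.mod = nn.Linear(ch * 4, ch * 2)  # maps statistics to channel gains
        self.up1 = nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1)
        self.up2 = nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1)

    def forward(self, feats, stats):
        gain = torch.sigmoid(self.mod(stats[-1]))[:, :, None, None]
        h = feats[-1] * gain          # texture-guided feature modulation
        h = F.relu(self.up1(h))
        return torch.tanh(self.up2(h))


def pose_contrastive_loss(z_a, z_b, same_pose, margin=1.0):
    """Pulls latent codes of same-pose pairs together and pushes
    different-pose pairs at least `margin` apart (standard contrastive form)."""
    d = F.pairwise_distance(z_a, z_b)
    return torch.where(same_pose, d.pow(2), F.relu(margin - d).pow(2)).mean()


def frontalize(x, encoder, texture, decoder, n_stages=3):
    """Divide-and-conquer: repeatedly synthesize a face with a smaller pitch
    angle until an approximately frontal face is produced."""
    for _ in range(n_stages):
        feats = encoder(x)
        stats = texture(feats)
        x = decoder(feats, stats)
    return x


if __name__ == "__main__":
    enc, tex, dec = FaceEncoder(), StatTextureModule(), FaceDecoder()
    tilted = torch.randn(2, 3, 128, 128)        # stand-in batch of tilted faces
    frontal = frontalize(tilted, enc, tex, dec)
    print(frontal.shape)                        # torch.Size([2, 3, 128, 128])

In this sketch, each stage consumes the previous stage's output image, so stacking the same generator n_stages times mimics the paper's gradual reduction of pitch angle; in the actual method, all stages are trained jointly end-to-end with the content, identity, adversarial, and pose contrastive losses described above.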