FaceDisentGAN: Disentangled facial editing with targeted semantic alignment
Meng Xu, Prince Hamandawana, Xiaohan Ma, Zekang Chen, Rize Jin, Tae-Sun Chung
Neurocomputing, Volume 658, Article 131706 (published 2025-10-03). DOI: 10.1016/j.neucom.2025.131706
Citations: 0
Abstract
Facial attribute editing in generative adversarial networks (GANs) involves two essential objectives: (1) accurately modifying the desired facial attribute, and (2) avoiding the unintended modification of irrelevant facial attributes. To address these challenges, we propose FaceDisentGAN, a novel generative framework for disentangled facial attribute manipulation. Specifically, we introduce: (1) a disentanglement module that decomposes feature maps into orthogonal spatial components (vertical and horizontal) to isolate target-related and unrelated semantics; (2) a two-stage training strategy that first learns general facial representations and then refines them to balance generic feature learning with fine-grained detail preservation; and (3) two novel evaluation metrics—Overall Preservation Score (OPS) and Perfect Match Rate (PMR)—which measure, respectively, the average preservation of non-target attributes and the proportion of perfectly disentangled results. This combination provides both soft and strict assessments of disentanglement quality. Extensive experiments demonstrate that FaceDisentGAN achieves accurate target attribute editing while effectively minimizing feature entanglement, outperforming several existing methods in both visual fidelity and semantic control.
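The abstract describes the disentanglement module only at a high level: feature maps are decomposed into orthogonal spatial components along the vertical and horizontal axes. As a rough illustration of what such axis-wise decomposition can look like, the following PyTorch sketch pools a feature map along each spatial axis and projects the two resulting components independently. This is a hedged sketch in the spirit of coordinate-attention-style designs, not the paper's actual module; the class name, the 1x1 projections, and the tensor shapes are all assumptions.

```python
import torch
import torch.nn as nn

class DirectionalDecomposition(nn.Module):
    """Hypothetical sketch: split a feature map into orthogonal
    vertical and horizontal components via axis-wise pooling.
    Not the paper's module; names and shapes are assumed."""

    def __init__(self, channels: int):
        super().__init__()
        # Independent 1x1 projections for each spatial direction.
        self.proj_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_h = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor):
        # x: (B, C, H, W)
        # Vertical component: average over width -> (B, C, H, 1)
        v = self.proj_v(x.mean(dim=3, keepdim=True))
        # Horizontal component: average over height -> (B, C, 1, W)
        h = self.proj_h(x.mean(dim=2, keepdim=True))
        # Broadcast back to (B, C, H, W); the two maps carry
        # complementary statistics along orthogonal spatial axes.
        return v.expand_as(x), h.expand_as(x)

# Usage sketch:
# feats_v, feats_h = DirectionalDecomposition(64)(torch.randn(2, 64, 32, 32))
```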
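The two metrics are stated only informally: OPS averages the preservation of non-target attributes, while PMR counts the proportion of samples whose non-target attributes are all preserved. A minimal sketch under that reading, computed from binary attribute predictions (e.g., from an off-the-shelf attribute classifier), follows; the function name, array layout, and exact formulas are assumptions and may differ from the paper's definitions.

```python
import numpy as np

def ops_and_pmr(attrs_before: np.ndarray,
                attrs_after: np.ndarray,
                target_idx: int):
    """Sketch of Overall Preservation Score (OPS) and Perfect Match
    Rate (PMR) as described informally in the abstract.

    attrs_before / attrs_after: (N, K) binary attribute predictions
    for N images and K attributes, before and after editing.
    target_idx: index of the edited (target) attribute.
    Assumed definitions: OPS is the average preservation rate over
    all non-target attributes; PMR is the fraction of images whose
    non-target attributes are all preserved.
    """
    non_target = np.ones(attrs_before.shape[1], dtype=bool)
    non_target[target_idx] = False

    preserved = attrs_before[:, non_target] == attrs_after[:, non_target]
    ops = preserved.mean()               # soft: average preservation
    pmr = preserved.all(axis=1).mean()   # strict: fully preserved images
    return float(ops), float(pmr)

# Toy example: 4 images, 3 attributes, attribute 0 is the edit target.
before = np.array([[0, 1, 0], [1, 0, 1], [0, 0, 1], [1, 1, 1]])
after  = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 1], [0, 1, 1]])
print(ops_and_pmr(before, after, target_idx=0))  # -> (0.875, 0.75)
```

In the toy example, one image flips a single non-target attribute: OPS degrades gracefully (soft assessment), while PMR drops for the whole image (strict assessment), matching the abstract's soft/strict framing.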
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.