Guoyin Ren, Qidan Guo, Zhijie Yu, Bo Jiang, Gong Li, Dong Li, Xinsong Wang
{"title":"PKDFIN:基于先验知识提取的缺失区域人脸图像补图网络","authors":"Guoyin Ren, Qidan Guo, Zhijie Yu, Bo Jiang, Gong Li, Dong Li, Xinsong Wang","doi":"10.1155/int/6897997","DOIUrl":null,"url":null,"abstract":"<p>Existing facial image inpainting methods demonstrate high reliance on the precision of prior knowledge. However, the acquisition of precise prior knowledge remains challenging, and the incorporation of predicted prior knowledge in the restoration process often leads to error propagation and accumulation, thereby compromising the reconstruction quality. To address this limitation, we propose a novel facial image inpainting framework that leverages knowledge distillation, which is specifically designed to mitigate error propagation caused by imprecise prior knowledge. More specifically, we develop a teacher network incorporating accurate facial prior information and establish a knowledge transfer mechanism between the teacher and student networks via knowledge distillation. During the training phase, the student network progressively acquires the prior information encoded in the teacher network, thus improving its restoration capability for missing or corrupted regions. Additionally, we introduce a Coordinate Attention Gated Convolution (CAG) module, which enables effective extraction of both structural and semantic features from intact regions. Experiments conducted on the public facial datasets (CelebA-HQ and FFHQ) show that our method achieves performance improvements over existing approaches in terms of multiple quantitative evaluation metrics, including PSNR, SSIM, MAE, and LPIPS. 
Thus, the knowledge transfer from teacher to student network via knowledge distillation significantly reduces the dependence on prior knowledge characteristic of existing methods, facilitating more precise and efficient facial image inpainting.</p>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":3.7000,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6897997","citationCount":"0","resultStr":"{\"title\":\"PKDFIN: Prior Knowledge Distillation-Based Face Image Inpainting Network for Missing Regions\",\"authors\":\"Guoyin Ren, Qidan Guo, Zhijie Yu, Bo Jiang, Gong Li, Dong Li, Xinsong Wang\",\"doi\":\"10.1155/int/6897997\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Existing facial image inpainting methods demonstrate high reliance on the precision of prior knowledge. However, the acquisition of precise prior knowledge remains challenging, and the incorporation of predicted prior knowledge in the restoration process often leads to error propagation and accumulation, thereby compromising the reconstruction quality. To address this limitation, we propose a novel facial image inpainting framework that leverages knowledge distillation, which is specifically designed to mitigate error propagation caused by imprecise prior knowledge. More specifically, we develop a teacher network incorporating accurate facial prior information and establish a knowledge transfer mechanism between the teacher and student networks via knowledge distillation. During the training phase, the student network progressively acquires the prior information encoded in the teacher network, thus improving its restoration capability for missing or corrupted regions. 
Additionally, we introduce a Coordinate Attention Gated Convolution (CAG) module, which enables effective extraction of both structural and semantic features from intact regions. Experiments conducted on the public facial datasets (CelebA-HQ and FFHQ) show that our method achieves performance improvements over existing approaches in terms of multiple quantitative evaluation metrics, including PSNR, SSIM, MAE, and LPIPS. Thus, the knowledge transfer from teacher to student network via knowledge distillation significantly reduces the dependence on prior knowledge characteristic of existing methods, facilitating more precise and efficient facial image inpainting.</p>\",\"PeriodicalId\":14089,\"journal\":{\"name\":\"International Journal of Intelligent Systems\",\"volume\":\"2025 1\",\"pages\":\"\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-09-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6897997\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1155/int/6897997\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1155/int/6897997","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
PKDFIN: Prior Knowledge Distillation-Based Face Image Inpainting Network for Missing Regions
Existing facial image inpainting methods rely heavily on precise prior knowledge. However, acquiring precise priors remains challenging, and incorporating predicted priors into the restoration process often causes errors to propagate and accumulate, thereby degrading reconstruction quality. To address this limitation, we propose a novel facial image inpainting framework based on knowledge distillation, specifically designed to mitigate the error propagation caused by imprecise prior knowledge. More specifically, we develop a teacher network that incorporates accurate facial prior information and establish a knowledge transfer mechanism between the teacher and student networks via knowledge distillation. During training, the student network progressively acquires the prior information encoded in the teacher network, improving its ability to restore missing or corrupted regions. Additionally, we introduce a Coordinate Attention Gated Convolution (CAG) module, which enables effective extraction of both structural and semantic features from intact regions. Experiments on the public facial datasets CelebA-HQ and FFHQ show that our method outperforms existing approaches on multiple quantitative evaluation metrics, including PSNR, SSIM, MAE, and LPIPS. Thus, transferring knowledge from the teacher to the student network via distillation significantly reduces the dependence on precise prior knowledge that characterizes existing methods, enabling more accurate and efficient facial image inpainting.
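The teacher-to-student transfer described in the abstract is typically trained with a combined objective: a pixel-level reconstruction loss on the student's output plus a feature-matching distillation loss that pulls the student's intermediate features toward the teacher's. The sketch below illustrates that general pattern in NumPy; the specific loss forms (L1 reconstruction, MSE feature matching) and the weighting term `lambda_kd` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats, student_out, target, lambda_kd=0.5):
    """Combined training objective: pixel reconstruction + feature distillation.

    student_feats / teacher_feats: lists of same-shaped feature maps taken from
    corresponding layers of the student and teacher networks. The teacher's
    features act as fixed targets (no gradient flows into the teacher).
    """
    # Reconstruction term: mean absolute error against the ground-truth image.
    rec = np.mean(np.abs(student_out - target))
    # Distillation term: mean squared error between paired feature maps,
    # averaged over all matched layers.
    kd = np.mean([np.mean((s - t) ** 2)
                  for s, t in zip(student_feats, teacher_feats)])
    return rec + lambda_kd * kd

# Toy example with random "features" and "images" to show the call shape.
rng = np.random.default_rng(0)
s_feats = [rng.standard_normal((8, 8)) for _ in range(2)]
t_feats = [rng.standard_normal((8, 8)) for _ in range(2)]
output = rng.standard_normal((16, 16))
target = rng.standard_normal((16, 16))
loss = distillation_loss(s_feats, t_feats, output, target)
```

When the student's features and output exactly match the teacher's features and the target, both terms vanish and the loss is zero, which is the fixed point the training pushes toward.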
Journal introduction:
The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned: examination, analysis, creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.