Efficient guided inpainting of larger hole missing images based on hierarchical decoding network
Xiucheng Dong, Yaling Ju, Dangcheng Zhang, Bing Hou, Jinqing He
Complex & Intelligent Systems (published 2025-01-23)
DOI: 10.1007/s40747-024-01686-8
Abstract
When dealing with images containing large missing regions, deep learning-based image inpainting algorithms often suffer from local structural distortion and blurring. This paper proposes a novel hierarchical decoding network for image inpainting. First, structural priors extracted from the encoding layer guide the first decoding layer, while residual blocks extract deep image features. Second, multiple hierarchical decoding layers progressively fill in the missing regions from top to bottom, with interlayer features and gradient priors guiding information transfer between layers. Furthermore, a Multi-dimensional Efficient Attention module is introduced for feature fusion, extracting image features across different dimensions more effectively than conventional methods. Finally, Efficient Context Fusion combines the reconstructed feature maps from the different decoding layers into the image space, preserving the semantic integrity of the output image. Experiments validate the effectiveness of the proposed method, which shows superior performance in both subjective and objective evaluations. When inpainting images with 50% to 60% missing regions, the proposed method improves PSNR and SSIM by 0.02 dB and 0.001 on CelebA-HQ, and by 0.22 dB and 0.003 on Places2.
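To make the pipeline the abstract describes more concrete, the sketch below wires an encoder, two stacked decoding layers, a simplified attention gate, and a final fusion step into a hole-filling network in PyTorch. It is a minimal illustration written from the abstract alone: the class names (MultiDimAttention, HierarchicalDecoder), the layer counts, channel widths, and the channel-plus-spatial gating are assumptions for demonstration, not the authors' Multi-dimensional Efficient Attention or Efficient Context Fusion implementations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiDimAttention(nn.Module):
    """Toy stand-in for the paper's Multi-dimensional Efficient Attention:
    gates features along the channel and spatial dimensions before fusion."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)      # channel-wise re-weighting
        return x * self.spatial_gate(x)   # spatial re-weighting


class HierarchicalDecoder(nn.Module):
    """Encoder extracts structural features from the masked image; two stacked
    decoding layers fill the hole coarse-to-fine, and a final fusion layer maps
    both decoders' feature maps back to image space (all sizes are assumptions)."""

    def __init__(self, base: int = 64):
        super().__init__()
        # Encoder input: masked RGB image (3 ch) concatenated with the mask (1 ch).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec1 = nn.Sequential(  # first decoding layer: 1/4 -> 1/2 resolution
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.dec2 = nn.Sequential(  # second decoding layer: 1/2 -> full resolution
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.att1 = MultiDimAttention(base)
        self.att2 = MultiDimAttention(base)
        # Fuse both decoding layers' feature maps into an RGB prediction in [0, 1].
        self.fuse = nn.Sequential(nn.Conv2d(base * 2, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, masked_img: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(torch.cat([masked_img, mask], dim=1))
        d1 = self.att1(self.dec1(feats))   # coarse fill, half resolution
        d2 = self.att2(self.dec2(d1))      # refined fill, full resolution
        d1_up = F.interpolate(d1, size=d2.shape[-2:], mode="bilinear",
                              align_corners=False)
        pred = self.fuse(torch.cat([d1_up, d2], dim=1))
        # Composite: keep known pixels, write the prediction only into the hole.
        return masked_img * mask + pred * (1 - mask)


if __name__ == "__main__":
    net = HierarchicalDecoder()
    img = torch.rand(1, 3, 256, 256)
    mask = torch.ones(1, 1, 256, 256)
    mask[:, :, 96:160, 96:160] = 0         # square hole in the centre
    out = net(img * mask, mask)
    print(out.shape)                       # torch.Size([1, 3, 256, 256])
```

The final composite step keeps known pixels untouched and writes the prediction only into the masked hole, which is the standard setup under which inpainting outputs are compared against the ground truth with PSNR and SSIM, as in the figures reported above.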
Journal Introduction
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.