Authors: Yongle Zhang, Yimin Liu, Hao Fan, Ruotong Hu, Jian Zhang, Qiang Wu
Journal: IEEE Transactions on Image Processing (Q1, Computer Science, Artificial Intelligence; Impact Factor 13.7)
DOI: 10.1109/tip.2025.3622071
URL: https://doi.org/10.1109/tip.2025.3622071
Publication date: 2025-10-21
Publication type: Journal Article
Citation count: 0
Consistent Image Inpainting with Pre-Perception and Cross-Perception Collaborative Processes.
It has been proven that introducing multiple guidance sources boosts image inpainting performance. However, existing methods primarily focus on local relationships and neglect the holistic interplay between guidance and texture information. Moreover, they lack an effective feedback mechanism to adaptively update the guidance process as corrupted texture information is progressively restored, potentially resulting in inconsistent inpainting. To tackle this issue, we propose a novel scheme aligned with pre-perception and cross-perception collaborative processes in human drawing. To mimic the pre-perception process, we introduce a pre-perceptual transformer block that captures long-range contextual dependencies and activates meaningful information to individually optimize image structures, semantic layouts, and textures, thereby effectively controlling their respective generation. To mimic the cross-perception collaborative process, we propose a cyclic cross-perceptual interaction to maintain consistency across the entire image regarding structure, layout, and texture while progressively refining their details. This interaction accounts for the global attention relationship between texture and other guidance sources (including image structure and semantic layout) to enhance image texture, alongside integrating a dedicated feedback mechanism to update guidance information. The proposed components are alternately deployed in three-branch decoders of the new scheme from rough to fine-grained levels to achieve these two iterative processes of human drawing. Experimental results prove the superiority of the proposed scheme over state-of-the-art methods across three datasets.
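The cyclic cross-perceptual interaction described in the abstract (texture enhanced by global attention over the guidance sources, with a feedback step that updates the guidance from the refined texture) can be sketched roughly as follows. This is a minimal NumPy illustration of that attention-plus-feedback loop, not the authors' implementation; the function names, the residual-update form, and the number of refinement steps are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    # Global (all-pairs) scaled dot-product attention: every query token
    # attends over every guidance token, so long-range relationships are kept.
    scores = queries @ keys_values.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ keys_values

def cyclic_interaction(texture, structure, layout, steps=3):
    # One possible reading of the cyclic cross-perceptual interaction:
    # texture is enhanced by attending to both guidance sources, then a
    # feedback pass updates each guidance branch from the refined texture.
    d = texture.shape[-1]
    for _ in range(steps):
        guidance = np.concatenate([structure, layout], axis=0)
        texture = texture + cross_attention(texture, guidance, d)
        # Feedback mechanism: guidance adapts as texture is restored.
        structure = structure + cross_attention(structure, texture, d)
        layout = layout + cross_attention(layout, texture, d)
    return texture, structure, layout

# Toy usage with random token features (8 tokens, 16-dim each per branch).
rng = np.random.default_rng(0)
tex = rng.normal(size=(8, 16))
struct = rng.normal(size=(8, 16))
lay = rng.normal(size=(8, 16))
tex_out, struct_out, lay_out = cyclic_interaction(tex, struct, lay)
```

In this sketch all three branches keep their token shapes across iterations, mirroring the paper's description of alternating the interaction from rough to fine-grained levels while each branch retains its own representation.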
Journal Introduction:
The IEEE Transactions on Image Processing covers groundbreaking theories, algorithms, and architectures for the generation, acquisition, manipulation, transmission, analysis, and presentation of images, video, and multidimensional signals across diverse applications. Topics span mathematical, statistical, and perceptual aspects, including the modeling, representation, formation, coding, filtering, enhancement, restoration, rendering, halftoning, search, and analysis of images, video, and multidimensional signals. Pertinent applications range from image and video communications to electronic imaging, biomedical imaging, image and video systems, and remote sensing.