CTD-Inpainting: Towards the Coherence of Text-driven Inpainting with Blended Diffusion

Yan Zhong, Xinping Zhao, Guangzhi Zhao, Bohua Chen, Fei Hao, Ruoyu Zhao, Jiaqi He, Lei Shi, Li Zhang

Information Fusion, Volume 122, Article 103163. Published 2025-04-22. DOI: 10.1016/j.inffus.2025.103163
Cited by: 0
Abstract
Text-driven inpainting has recently emerged as a prominent and challenging research topic in image completion, where approaches based on denoising diffusion probabilistic models (DDPMs) have achieved state-of-the-art performance on authentic and diverse images. However, ensuring high image fidelity during generation remains critical to effective text-driven inpainting. Moreover, guaranteeing coherence between the unmasked region (background) and the results generated in the masked region is difficult both to measure and to implement. To address these issues, we propose CTD-Inpainting, a novel text-driven inpainting framework that incorporates a coherence constraint between the masked and unmasked regions. Specifically, CTD-Inpainting employs a pre-trained contrastive language-image model (CLIP) to guide DDPM-based generation, aligning it with the text prompt. Additionally, we introduce a transition region between the background and the masked region via mask expansion. This transition region helps maintain coherence between the foreground and background by ensuring consistency between the generated results and the original background during inpainting. At each denoising step, we employ a blending technique in which multiple noise-injected versions of the input image are harmonized with the text-guided latent diffusion under the coherence constraint in the transition region. This enables seamless integration of conditional information with the generated information via resampling. Additionally, we design a novel coherence metric based on the coherence constraint, providing a quantitative counterpart to subjective coherence assessment. Extensive experiments demonstrate the superiority of CTD-Inpainting over state-of-the-art methods on real-world and diverse images.
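The two mechanisms the abstract describes can be illustrated in a minimal NumPy sketch: dilating the inpainting mask to obtain a transition region, and, at a denoising step, compositing a re-noised copy of the known background with the text-guided latent. This is an illustrative assumption of how such a blended step could look, not the paper's actual implementation; the function names, the wrap-around dilation via `np.roll`, and the single-image (rather than multi-version) blending are simplifications.

```python
import numpy as np

def expand_mask(mask, width=8):
    """Dilate a binary inpainting mask by `width` pixels.
    The ring between the expanded and the original mask serves as the
    transition region where coherence can be enforced.
    (np.roll wraps at image borders; acceptable for a sketch.)"""
    expanded = mask.astype(bool).copy()
    for _ in range(width):
        expanded = (expanded
                    | np.roll(expanded, 1, axis=0) | np.roll(expanded, -1, axis=0)
                    | np.roll(expanded, 1, axis=1) | np.roll(expanded, -1, axis=1))
    transition = expanded & ~mask.astype(bool)
    return expanded, transition

def blended_step(x_t, x0_known, mask, alpha_bar_t, rng):
    """One blending step in the spirit of blended diffusion: re-noise the
    known background image to the current noise level alpha_bar_t and
    composite it with the guided latent x_t (mask == 1 marks the region
    being generated; mask == 0 keeps the noised background)."""
    noise = rng.standard_normal(x0_known.shape)
    x_known_t = (np.sqrt(alpha_bar_t) * x0_known
                 + np.sqrt(1.0 - alpha_bar_t) * noise)
    return mask * x_t + (1.0 - mask) * x_known_t
```

In a full pipeline, `blended_step` would be applied after every denoising update, so the background is repeatedly re-imposed at the matching noise level while the masked region evolves under text guidance.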
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.