Improving Text-guided Object Inpainting with Semantic Pre-inpainting

Yifu Chen, Jingwen Chen, Yingwei Pan, Yehao Li, Ting Yao, Zhineng Chen, Tao Mei

arXiv - CS - Multimedia, 2024-09-12 (arXiv:2409.08260)
Abstract
Recent years have witnessed the success of large text-to-image diffusion models and their remarkable potential to generate high-quality images. The further pursuit of enhancing the editability of images has sparked significant interest in the downstream task of inpainting a novel object, described by a text prompt, within a designated region of an image. Nevertheless, the problem is non-trivial for two reasons: 1) relying solely on a single U-Net to align the text prompt with the visual object across all denoising timesteps is insufficient to generate the desired objects; 2) the controllability of object generation is not guaranteed in the intricate sampling space of the diffusion model. In this paper, we propose to decompose the typical single-stage object inpainting into two cascaded processes: 1) semantic pre-inpainting, which infers the semantic features of the desired object in a multi-modal feature space; 2) high-fidelity object generation in the diffusion latent space, which pivots on those inpainted semantic features. To achieve this, we cascade a Transformer-based semantic inpainter and an object inpainting diffusion model, leading to a novel CAscaded Transformer-Diffusion (CAT-Diffusion) framework for text-guided object inpainting. Technically, the semantic inpainter is trained to predict the semantic features of the target object conditioned on the unmasked context and the text prompt. The outputs of the semantic inpainter then act as informative visual prompts that guide high-fidelity object generation through a reference adapter layer, leading to controllable object inpainting. Extensive evaluations on OpenImages-V6 and MSCOCO validate the superiority of CAT-Diffusion over state-of-the-art methods. Code is available at https://github.com/Nnn-s/CATdiffusion.
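
The abstract outlines a two-stage cascade: a Transformer predicts the masked object's semantic features from the unmasked context and the text prompt, and those features are then fed into the diffusion U-Net through a reference adapter layer. The following is a minimal PyTorch-style sketch of how such a cascade might be wired; all module and method names (SemanticInpainter, ReferenceAdapter, the mask-token handling, etc.) are illustrative assumptions, not the authors' actual implementation from the linked repository.

```python
# Hypothetical sketch of the CAT-Diffusion two-stage idea described in the abstract.
# Names, dimensions, and the exact conditioning scheme are assumptions for illustration.
import torch
import torch.nn as nn


class SemanticInpainter(nn.Module):
    """Stage 1: Transformer that predicts semantic features of the masked object
    from unmasked visual context tokens and text-prompt tokens."""

    def __init__(self, dim: int = 768, depth: int = 6, heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Learnable placeholder inserted at masked visual positions.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, visual_tokens, text_tokens, mask):
        # mask: (B, N) boolean, True where the region to inpaint lies.
        tokens = torch.where(
            mask.unsqueeze(-1),
            self.mask_token.expand_as(visual_tokens),
            visual_tokens,
        )
        # Condition on the text prompt by concatenating it to the visual sequence.
        fused = self.encoder(torch.cat([tokens, text_tokens], dim=1))
        # Return predictions only for the visual positions (the pre-inpainted semantics).
        return fused[:, : visual_tokens.size(1)]


class ReferenceAdapter(nn.Module):
    """Stage 2 hook: cross-attention layer that injects the pre-inpainted semantic
    features into a diffusion U-Net block as an additional visual prompt."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, unet_hidden, semantic_features):
        attended, _ = self.attn(unet_hidden, semantic_features, semantic_features)
        return unet_hidden + attended  # residual injection into the denoising path
```

In this reading, the single U-Net no longer has to recover object semantics and appearance jointly at every timestep: stage 1 fixes a semantic target once, and stage 2 only has to render it, which is how the cascade is meant to improve controllability.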