FCAT-Diff: Flexible and Consistent Appearance Transfer Based on Training-free Diffusion Model
Zhengyi Gong, Mingwen Shao, Chang Liu, Xiang Lv, Huan Liu
Computers & Graphics, Volume 130, Article 104247 (published 2025-05-31)
DOI: 10.1016/j.cag.2025.104247
https://www.sciencedirect.com/science/article/pii/S0097849325000883
Abstract
The core goal of appearance transfer is to seamlessly integrate the appearance of a reference image into a content image. However, existing methods operate on the entire image and fail to accurately identify the regions of interest for appearance transfer, leading to structural loss and incorrect background transfer. Additionally, these methods lack flexibility, making it difficult to achieve fine-grained control at the region level. To address these issues, we propose FCAT-Diff, a training-free framework for flexible and consistent appearance transfer that requires no fine-tuning. Specifically, to achieve more consistent appearance transfer, we employ a dual-guidance branch that provides structure and appearance features, which are fused through an enhanced self-attention module called Mask-Appearance-Attention (MAA). MAA clearly delineates the boundary between the background and the transferred region, ensuring consistency in both structure and background. To increase the flexibility of transfer, we use a mask that lets users select the regions of interest, enabling appearance transfer for specified regions. Furthermore, given multiple reference images and their corresponding regions, FCAT-Diff supports the transfer of multiple appearances. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) performance in maintaining the structural and background consistency of the content image while providing greater flexibility.
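The abstract describes MAA as masked self-attention that routes reference appearance features into a user-selected region while leaving the background untouched. As a rough illustration only, the sketch below shows one way such a masked attention fusion could be wired up; the function name, tensor shapes, and the hard blend between an appearance branch and an ordinary self-attention branch are our assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an MAA-style masked attention fusion.
# Assumes per-token features from a content (structure) branch and a
# reference (appearance) branch, plus a binary region mask. Illustrative
# only; this is not the authors' code.
import torch

def mask_appearance_attention(q_content, k_appear, v_appear,
                              k_content, v_content, region_mask):
    """Fuse appearance features into the masked region only.

    q_content:            (B, N, D) queries from the content branch
    k_appear, v_appear:   (B, M, D) keys/values from the reference branch
    k_content, v_content: (B, N, D) keys/values from the content branch
    region_mask:          (B, N)    1 where appearance is transferred, else 0
    """
    scale = q_content.shape[-1] ** 0.5
    # Content queries attend over reference appearance features.
    attn_app = torch.softmax(
        q_content @ k_appear.transpose(-1, -2) / scale, dim=-1)
    out_app = attn_app @ v_appear
    # Ordinary self-attention over the content branch (keeps structure
    # and background as-is).
    attn_self = torch.softmax(
        q_content @ k_content.transpose(-1, -2) / scale, dim=-1)
    out_self = attn_self @ v_content
    # The mask routes appearance output to the selected region and keeps
    # the original content features everywhere else.
    m = region_mask.unsqueeze(-1).to(out_app.dtype)
    return m * out_app + (1.0 - m) * out_self
```

In this sketch the mask acts as a hard gate: tokens inside the selected region draw their values from attention over the reference branch, while background tokens keep ordinary self-attention over the content branch, which is one plausible way to obtain the background consistency the abstract claims.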
About the journal
Computers & Graphics is dedicated to disseminating information on research and applications of computer graphics (CG) techniques. The journal encourages articles on:
1. Research and applications of interactive computer graphics. We are particularly interested in novel interaction techniques and applications of CG to problem domains.
2. State-of-the-art papers on late-breaking, cutting-edge research on CG.
3. Information on innovative uses of graphics principles and technologies.
4. Tutorial papers on both teaching CG principles and innovative uses of CG in education.