{"title":"MagicStyle:基于参考图像的肖像风格化","authors":"Zhaoli Deng, Kaibin Zhou, Fanyi Wang, Zhenpeng Mi","doi":"arxiv-2409.08156","DOIUrl":null,"url":null,"abstract":"The development of diffusion models has significantly advanced the research\non image stylization, particularly in the area of stylizing a content image\nbased on a given style image, which has attracted many scholars. The main\nchallenge in this reference image stylization task lies in how to maintain the\ndetails of the content image while incorporating the color and texture features\nof the style image. This challenge becomes even more pronounced when the\ncontent image is a portrait which has complex textural details. To address this\nchallenge, we propose a diffusion model-based reference image stylization\nmethod specifically for portraits, called MagicStyle. MagicStyle consists of\ntwo phases: Content and Style DDIM Inversion (CSDI) and Feature Fusion Forward\n(FFF). The CSDI phase involves a reverse denoising process, where DDIM\nInversion is performed separately on the content image and the style image,\nstoring the self-attention query, key and value features of both images during\nthe inversion process. The FFF phase executes forward denoising, harmoniously\nintegrating the texture and color information from the pre-stored feature\nqueries, keys and values into the diffusion generation process based on our\nWell-designed Feature Fusion Attention (FFA). We conducted comprehensive\ncomparative and ablation experiments to validate the effectiveness of our\nproposed MagicStyle and FFA.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MagicStyle: Portrait Stylization Based on Reference Image\",\"authors\":\"Zhaoli Deng, Kaibin Zhou, Fanyi Wang, Zhenpeng Mi\",\"doi\":\"arxiv-2409.08156\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The development of diffusion models has significantly advanced the research\\non image stylization, particularly in the area of stylizing a content image\\nbased on a given style image, which has attracted many scholars. The main\\nchallenge in this reference image stylization task lies in how to maintain the\\ndetails of the content image while incorporating the color and texture features\\nof the style image. This challenge becomes even more pronounced when the\\ncontent image is a portrait which has complex textural details. To address this\\nchallenge, we propose a diffusion model-based reference image stylization\\nmethod specifically for portraits, called MagicStyle. MagicStyle consists of\\ntwo phases: Content and Style DDIM Inversion (CSDI) and Feature Fusion Forward\\n(FFF). The CSDI phase involves a reverse denoising process, where DDIM\\nInversion is performed separately on the content image and the style image,\\nstoring the self-attention query, key and value features of both images during\\nthe inversion process. The FFF phase executes forward denoising, harmoniously\\nintegrating the texture and color information from the pre-stored feature\\nqueries, keys and values into the diffusion generation process based on our\\nWell-designed Feature Fusion Attention (FFA). 
We conducted comprehensive\\ncomparative and ablation experiments to validate the effectiveness of our\\nproposed MagicStyle and FFA.\",\"PeriodicalId\":501130,\"journal\":{\"name\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08156\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08156","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The development of diffusion models has significantly advanced research on image stylization, particularly the task of stylizing a content image based on a given style image, which has attracted many researchers. The main challenge in this reference-image stylization task lies in preserving the details of the content image while incorporating the color and texture features of the style image. The challenge becomes even more pronounced when the content image is a portrait with complex textural details. To address this challenge, we propose MagicStyle, a diffusion-model-based reference image stylization method designed specifically for portraits. MagicStyle consists of two phases: Content and Style DDIM Inversion (CSDI) and Feature Fusion Forward (FFF). The CSDI phase is a reverse denoising process in which DDIM Inversion is performed separately on the content image and the style image, storing the self-attention query, key, and value features of both images during inversion. The FFF phase performs forward denoising, harmoniously integrating the texture and color information from the pre-stored feature queries, keys, and values into the diffusion generation process via our well-designed Feature Fusion Attention (FFA). We conducted comprehensive comparative and ablation experiments to validate the effectiveness of the proposed MagicStyle and FFA.
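
The abstract describes, but does not fully specify, how the self-attention features are cached during CSDI. The snippet below is a minimal, self-contained sketch of one way to record per-step query/key/value tensors with PyTorch forward hooks; the SelfAttention block, cache layout, and step loop are illustrative stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Toy stand-in for a diffusion U-Net self-attention block."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = (q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5
        return attn.softmax(dim=-1) @ v

# cache[step][module_name] -> (q, k, v) recorded while "inverting" an image.
cache: dict[int, dict[str, tuple[torch.Tensor, ...]]] = {}
current_step = 0

def make_hook(name):
    def hook(module, inputs, output):
        x = inputs[0]
        # Recompute q/k/v from the block's own projections and store them
        # under the current inversion step for later reuse in the FFF phase.
        cache.setdefault(current_step, {})[name] = (
            module.to_q(x).detach(),
            module.to_k(x).detach(),
            module.to_v(x).detach(),
        )
    return hook

block = SelfAttention(dim=64)
block.register_forward_hook(make_hook("block_0"))

latent = torch.randn(1, 16, 64)   # dummy latent: (batch, tokens, channels)
for current_step in range(3):     # stand-in for DDIM inversion timesteps
    _ = block(latent)

print({step: {name: tuple(t.shape for t in qkv) for name, qkv in feats.items()}
       for step, feats in cache.items()})
```

In practice this caching would be run twice, once on the content image and once on the style image, so that both feature sets are available to the FFF phase.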
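
The abstract likewise does not give the exact form of FFA. One plausible reading, sketched below, is an attention operation whose query comes from the current forward-denoising latent while the cached content and style keys/values are concatenated along the token axis, so the output blends structural detail from the content features with color and texture from the style features. The function name, tensor shapes, and fusion-by-concatenation choice are assumptions for illustration, not the paper's definition.

```python
import torch

def feature_fusion_attention(q_cur, k_content, v_content, k_style, v_style):
    """Hypothetical sketch of Feature Fusion Attention (FFA).

    q_cur:                (B, N, C) query from the current forward-denoising step
    k_content, v_content: (B, N, C) features cached during the content image's
                          DDIM Inversion (CSDI phase)
    k_style, v_style:     (B, N, C) features cached during the style image's inversion
    """
    scale = q_cur.shape[-1] ** -0.5
    # Let the query attend jointly to content and style tokens.
    k = torch.cat([k_content, k_style], dim=1)    # (B, 2N, C)
    v = torch.cat([v_content, v_style], dim=1)    # (B, 2N, C)
    attn = (q_cur @ k.transpose(-2, -1)) * scale  # (B, N, 2N)
    return attn.softmax(dim=-1) @ v               # (B, N, C)

# Example call with random tensors standing in for the cached features.
B, N, C = 1, 16, 64
out = feature_fusion_attention(*(torch.randn(B, N, C) for _ in range(5)))
print(out.shape)  # torch.Size([1, 16, 64])
```

Under this reading, the fusion would replace the U-Net's ordinary self-attention at each forward-denoising step, using the features cached at the corresponding inversion step.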