{"title":"Click2Mask:本地编辑与动态蒙版生成","authors":"Omer Regev, Omri Avrahami, Dani Lischinski","doi":"arxiv-2409.08272","DOIUrl":null,"url":null,"abstract":"Recent advancements in generative models have revolutionized image generation\nand editing, making these tasks accessible to non-experts. This paper focuses\non local image editing, particularly the task of adding new content to a\nloosely specified area. Existing methods often require a precise mask or a\ndetailed description of the location, which can be cumbersome and prone to\nerrors. We propose Click2Mask, a novel approach that simplifies the local\nediting process by requiring only a single point of reference (in addition to\nthe content description). A mask is dynamically grown around this point during\na Blended Latent Diffusion (BLD) process, guided by a masked CLIP-based\nsemantic loss. Click2Mask surpasses the limitations of segmentation-based and\nfine-tuning dependent methods, offering a more user-friendly and contextually\naccurate solution. Our experiments demonstrate that Click2Mask not only\nminimizes user effort but also delivers competitive or superior local image\nmanipulation results compared to SoTA methods, according to both human\njudgement and automatic metrics. Key contributions include the simplification\nof user input, the ability to freely add objects unconstrained by existing\nsegments, and the integration potential of our dynamic mask approach within\nother editing methods.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Click2Mask: Local Editing with Dynamic Mask Generation\",\"authors\":\"Omer Regev, Omri Avrahami, Dani Lischinski\",\"doi\":\"arxiv-2409.08272\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advancements in generative models have revolutionized image generation\\nand editing, making these tasks accessible to non-experts. This paper focuses\\non local image editing, particularly the task of adding new content to a\\nloosely specified area. Existing methods often require a precise mask or a\\ndetailed description of the location, which can be cumbersome and prone to\\nerrors. We propose Click2Mask, a novel approach that simplifies the local\\nediting process by requiring only a single point of reference (in addition to\\nthe content description). A mask is dynamically grown around this point during\\na Blended Latent Diffusion (BLD) process, guided by a masked CLIP-based\\nsemantic loss. Click2Mask surpasses the limitations of segmentation-based and\\nfine-tuning dependent methods, offering a more user-friendly and contextually\\naccurate solution. Our experiments demonstrate that Click2Mask not only\\nminimizes user effort but also delivers competitive or superior local image\\nmanipulation results compared to SoTA methods, according to both human\\njudgement and automatic metrics. 
Key contributions include the simplification\\nof user input, the ability to freely add objects unconstrained by existing\\nsegments, and the integration potential of our dynamic mask approach within\\nother editing methods.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08272\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08272","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recent advancements in generative models have revolutionized image generation and editing, making these tasks accessible to non-experts. This paper focuses on local image editing, particularly the task of adding new content to a loosely specified area. Existing methods often require a precise mask or a detailed description of the location, which can be cumbersome and prone to errors. We propose Click2Mask, a novel approach that simplifies the local editing process by requiring only a single point of reference (in addition to the content description). A mask is dynamically grown around this point during a Blended Latent Diffusion (BLD) process, guided by a masked CLIP-based semantic loss. Click2Mask surpasses the limitations of segmentation-based and fine-tuning dependent methods, offering a more user-friendly and contextually accurate solution. Our experiments demonstrate that Click2Mask not only minimizes user effort but also delivers competitive or superior local image manipulation results compared to SoTA methods, according to both human judgement and automatic metrics. Key contributions include the simplification of user input, the ability to freely add objects unconstrained by existing segments, and the integration potential of our dynamic mask approach within other editing methods.
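
To make the loop described in the abstract concrete, the sketch below shows one possible reading of it: a soft mask is seeded at the user's click, grown step by step during blended latent denoising, and each growth step is kept only if a masked CLIP-style semantic loss improves. This is a minimal sketch, not the authors' implementation: `denoise_step` and `semantic_loss` are hypothetical placeholders for a real diffusion model and CLIP scorer, and the simple dilate-and-test growth rule stands in for whatever mask-evolution mechanism the paper actually uses, which the abstract does not specify.

```python
# Conceptual sketch only -- NOT the Click2Mask implementation. It illustrates
# the high-level idea from the abstract: grow a mask around a single click
# point during blended latent denoising, guided by a masked semantic loss.
# `denoise_step` and `semantic_loss` are hypothetical placeholders.

import torch
import torch.nn.functional as F


def mask_from_click(click_xy, shape, radius=2):
    """Initialize a small soft mask around the user's click point."""
    h, w = shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist = ((ys - click_xy[1]) ** 2 + (xs - click_xy[0]) ** 2).float().sqrt()
    return (dist <= radius).float()


def dilate(mask, k=3):
    """Grow the mask by one morphological dilation step."""
    return F.max_pool2d(mask[None, None], k, stride=1, padding=k // 2)[0, 0]


def click2mask_sketch(z_bg, click_xy, denoise_step, semantic_loss, steps=50):
    """Blended-latent-style editing with a dynamically grown mask.

    z_bg: list of noisy latents of the original image, one per timestep.
    denoise_step(z, t): one reverse-diffusion step (placeholder).
    semantic_loss(z, mask): masked CLIP-style loss; lower means the masked
        region better matches the target text prompt (placeholder).
    """
    h, w = z_bg[0].shape[-2:]
    mask = mask_from_click(click_xy, (h, w))
    z = z_bg[0].clone()  # start the edit branch from the noisiest latent (simplification)

    for t in range(steps):
        z = denoise_step(z, t)            # edit branch: denoise toward the prompt
        candidate = dilate(mask)          # propose a slightly larger mask
        # Accept the larger mask only if it improves the masked semantic loss.
        if semantic_loss(z, candidate) < semantic_loss(z, mask):
            mask = candidate
        # Blend: edited content inside the mask, original latents outside.
        z = z * mask + z_bg[t] * (1.0 - mask)

    return z, mask
```

The last line of the loop is the standard blended-latent blending rule: edited latents are kept inside the mask while the original image's noisy latents are restored outside it, which is what confines the edit to the dynamically grown region.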