Iterative Object Count Optimization for Text-to-Image Diffusion Models

Oz Zafar, Lior Wolf, Idan Schwartz
{"title":"文本到图像扩散模型的迭代对象计数优化","authors":"Oz Zafar, Lior Wolf, Idan Schwartz","doi":"arxiv-2408.11721","DOIUrl":null,"url":null,"abstract":"We address a persistent challenge in text-to-image models: accurately\ngenerating a specified number of objects. Current models, which learn from\nimage-text pairs, inherently struggle with counting, as training data cannot\ndepict every possible number of objects for any given object. To solve this, we\npropose optimizing the generated image based on a counting loss derived from a\ncounting model that aggregates an object\\'s potential. Employing an\nout-of-the-box counting model is challenging for two reasons: first, the model\nrequires a scaling hyperparameter for the potential aggregation that varies\ndepending on the viewpoint of the objects, and second, classifier guidance\ntechniques require modified models that operate on noisy intermediate diffusion\nsteps. To address these challenges, we propose an iterated online training mode\nthat improves the accuracy of inferred images while altering the text\nconditioning embedding and dynamically adjusting hyperparameters. Our method\noffers three key advantages: (i) it can consider non-derivable counting\ntechniques based on detection models, (ii) it is a zero-shot plug-and-play\nsolution facilitating rapid changes to the counting techniques and image\ngeneration methods, and (iii) the optimized counting token can be reused to\ngenerate accurate images without additional optimization. We evaluate the\ngeneration of various objects and show significant improvements in accuracy.\nThe project page is available at https://ozzafar.github.io/count_token.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"60 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Iterative Object Count Optimization for Text-to-image Diffusion Models\",\"authors\":\"Oz Zafar, Lior Wolf, Idan Schwartz\",\"doi\":\"arxiv-2408.11721\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We address a persistent challenge in text-to-image models: accurately\\ngenerating a specified number of objects. Current models, which learn from\\nimage-text pairs, inherently struggle with counting, as training data cannot\\ndepict every possible number of objects for any given object. To solve this, we\\npropose optimizing the generated image based on a counting loss derived from a\\ncounting model that aggregates an object\\\\'s potential. Employing an\\nout-of-the-box counting model is challenging for two reasons: first, the model\\nrequires a scaling hyperparameter for the potential aggregation that varies\\ndepending on the viewpoint of the objects, and second, classifier guidance\\ntechniques require modified models that operate on noisy intermediate diffusion\\nsteps. To address these challenges, we propose an iterated online training mode\\nthat improves the accuracy of inferred images while altering the text\\nconditioning embedding and dynamically adjusting hyperparameters. Our method\\noffers three key advantages: (i) it can consider non-derivable counting\\ntechniques based on detection models, (ii) it is a zero-shot plug-and-play\\nsolution facilitating rapid changes to the counting techniques and image\\ngeneration methods, and (iii) the optimized counting token can be reused to\\ngenerate accurate images without additional optimization. 
We evaluate the\\ngeneration of various objects and show significant improvements in accuracy.\\nThe project page is available at https://ozzafar.github.io/count_token.\",\"PeriodicalId\":501174,\"journal\":{\"name\":\"arXiv - CS - Graphics\",\"volume\":\"60 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.11721\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.11721","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

We address a persistent challenge in text-to-image models: accurately generating a specified number of objects. Current models, which learn from image-text pairs, inherently struggle with counting, as training data cannot depict every possible number of objects for any given object. To solve this, we propose optimizing the generated image based on a counting loss derived from a counting model that aggregates an object's potential. Employing an out-of-the-box counting model is challenging for two reasons: first, the model requires a scaling hyperparameter for the potential aggregation that varies depending on the viewpoint of the objects, and second, classifier guidance techniques require modified models that operate on noisy intermediate diffusion steps. To address these challenges, we propose an iterated online training mode that improves the accuracy of inferred images while altering the text conditioning embedding and dynamically adjusting hyperparameters. Our method offers three key advantages: (i) it can consider non-derivable counting techniques based on detection models, (ii) it is a zero-shot plug-and-play solution facilitating rapid changes to the counting techniques and image generation methods, and (iii) the optimized counting token can be reused to generate accurate images without additional optimization. We evaluate the generation of various objects and show significant improvements in accuracy. The project page is available at https://ozzafar.github.io/count_token.
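
The abstract outlines an optimization loop: a counting loss, computed by applying a counting model to the generated image, is used to update the text-conditioning embedding (a "counting token") until the image contains the requested number of objects. The sketch below illustrates that loop in PyTorch-style pseudocode under strong assumptions: get_prompt_embeddings, generate_image, and count_objects are hypothetical placeholders rather than the authors' code or any real library API, and the counting model is assumed to return a differentiable scalar count (e.g., an aggregated density-map potential).

```python
import torch

# Minimal sketch of counting-token optimization, under the assumptions stated
# above. All helper names are hypothetical placeholders, not a real API.

def optimize_counting_token(prompt: str, target_count: float,
                            steps: int = 50, lr: float = 1e-2) -> torch.Tensor:
    # Start from the text-conditioning embedding of the original prompt
    # (e.g., CLIP text features in a latent-diffusion pipeline).
    prompt_embeds = get_prompt_embeddings(prompt)
    counting_token = prompt_embeds.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([counting_token], lr=lr)

    for _ in range(steps):
        # 1) Generate an image conditioned on the current embedding; assumed
        #    differentiable with respect to the conditioning for this sketch.
        image = generate_image(counting_token)

        # 2) Estimate how many target objects appear, as a differentiable
        #    scalar (e.g., the sum of a predicted object-potential map).
        predicted_count = count_objects(image)

        # 3) Penalize deviation from the requested count and update only the
        #    counting token, leaving the generator frozen.
        loss = (predicted_count - target_count) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # The optimized token can be cached and reused to generate accurate images
    # for the same prompt without repeating the optimization.
    return counting_token.detach()
```

For detection-based counters that are not differentiable, the abstract's iterated online training mode would instead score generated images and adjust the token and the scaling hyperparameter between generations, rather than backpropagating through the counter as this sketch does.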