Yinwei Wu, Xianpan Zhou, Bing Ma, Xuefeng Su, Kai Ma, Xinchao Wang
arXiv:2409.08240 · arXiv - CS - Computer Vision and Pattern Recognition · Published 2024-09-12
IFAdapter: Instance Feature Control for Grounded Text-to-Image Generation
While Text-to-Image (T2I) diffusion models excel at generating visually
appealing images of individual instances, they struggle to accurately position
and control the feature generation of multiple instances. The Layout-to-Image
(L2I) task was introduced to address the positioning challenges by
incorporating bounding boxes as spatial control signals, but it still falls
short in generating precise instance features. In response, we propose the
Instance Feature Generation (IFG) task, which aims to ensure both positional
accuracy and feature fidelity in generated instances. To address the IFG task,
we introduce the Instance Feature Adapter (IFAdapter). The IFAdapter enhances
feature depiction by incorporating additional appearance tokens and utilizing
an Instance Semantic Map to align instance-level features with spatial
locations. The IFAdapter guides the diffusion process as a plug-and-play
module, making it adaptable to various community models. For evaluation, we
contribute an IFG benchmark and develop a verification pipeline to objectively
compare models' abilities to generate instances with accurate positioning and
features. Experimental results demonstrate that IFAdapter outperforms other
models in both quantitative and qualitative evaluations.
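The Instance Semantic Map described above pairs each instance's appearance embedding with its spatial location. As a rough illustration of the idea (not the paper's actual construction — the function name, the normalized-box convention, and the smaller-box-wins overlap rule are all assumptions for this sketch), one could rasterize per-instance embeddings onto a spatial grid like this:

```python
import numpy as np

def instance_semantic_map(boxes, embeddings, H, W):
    """Toy sketch: rasterize instance embeddings onto a spatial grid.

    boxes: list of (x0, y0, x1, y1) in [0, 1] normalized coordinates.
    embeddings: (N, D) array, one appearance embedding per instance.
    Returns an (H, W, D) map where each cell carries the embedding of
    the instance covering it. Overlap rule (an assumption here): paint
    larger boxes first so smaller, likely-foreground boxes overwrite them.
    """
    D = embeddings.shape[1]
    sem_map = np.zeros((H, W, D), dtype=np.float32)
    areas = [(x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in boxes]
    for i in np.argsort(areas)[::-1]:  # largest area first
        x0, y0, x1, y1 = boxes[i]
        # Convert normalized coords to grid indices, keeping at least one cell.
        r0, r1 = int(y0 * H), max(int(y0 * H) + 1, int(y1 * H))
        c0, c1 = int(x0 * W), max(int(x0 * W) + 1, int(x1 * W))
        sem_map[r0:r1, c0:c1] = embeddings[i]
    return sem_map

# A large background instance and a small one overlapping it.
boxes = [(0.0, 0.0, 1.0, 1.0), (0.25, 0.25, 0.5, 0.5)]
embeddings = np.array([[1.0, 0.0], [0.0, 1.0]])
m = instance_semantic_map(boxes, embeddings, H=8, W=8)
```

In the actual IFAdapter, such a map conditions the diffusion model's attention layers so that instance-level features are injected only at the right locations; this sketch only shows the spatial-alignment bookkeeping.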