TextBoost: Towards One-Shot Personalization of Text-to-Image Models via Fine-tuning Text Encoder

NaHyeon Park, Kunhee Kim, Hyunjung Shim
arXiv - CS - Computer Vision and Pattern Recognition, published 2024-09-12
DOI: arxiv-2409.08248

Abstract

Recent breakthroughs in text-to-image models have opened up promising research avenues in personalized image generation, enabling users to create diverse images of a specific subject using natural language prompts. However, existing methods often suffer from performance degradation when given only a single reference image. They tend to overfit the input, producing highly similar outputs regardless of the text prompt. This paper addresses the challenge of one-shot personalization by mitigating overfitting, enabling the creation of controllable images through text prompts. Specifically, we propose a selective fine-tuning strategy that focuses on the text encoder. Furthermore, we introduce three key techniques to enhance personalization performance: (1) augmentation tokens to encourage feature disentanglement and alleviate overfitting, (2) a knowledge-preservation loss to reduce language drift and promote generalizability across diverse prompts, and (3) SNR-weighted sampling for efficient training. Extensive experiments demonstrate that our approach efficiently generates high-quality, diverse images using only a single reference image while significantly reducing memory and storage requirements.
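The abstract's third technique, SNR-weighted sampling, refers to weighting diffusion training timesteps by their signal-to-noise ratio. A minimal sketch of the idea is below, assuming a standard linear beta schedule and a Min-SNR-style capped weight (both are illustrative assumptions; the abstract does not specify the paper's exact schedule or weighting function):

```python
import random

def linear_beta_schedule(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    # Hypothetical linear noise schedule, as commonly used in DDPM-style training.
    step = (beta_end - beta_start) / (num_steps - 1)
    return [beta_start + i * step for i in range(num_steps)]

def snr_per_timestep(betas):
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t), where alpha_bar_t is the
    # cumulative product of (1 - beta_s) for s <= t.
    snrs, alpha_bar = [], 1.0
    for beta in betas:
        alpha_bar *= 1.0 - beta
        snrs.append(alpha_bar / (1.0 - alpha_bar))
    return snrs

def sample_timestep(snrs, gamma=5.0, rng=random):
    # Draw a timestep with probability proportional to a bounded SNR weight,
    # min(SNR, gamma) -- an assumption loosely following Min-SNR weighting,
    # not necessarily the paper's scheme.
    weights = [min(s, gamma) for s in snrs]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for t, w in enumerate(weights):
        acc += w
        if r <= acc:
            return t
    return len(weights) - 1

betas = linear_beta_schedule()
snrs = snr_per_timestep(betas)
t = sample_timestep(snrs)
```

Because SNR decays monotonically with the timestep, the capped weighting biases sampling away from very-high-SNR (nearly clean) steps, which contribute little gradient signal, toward the noisier steps where the denoiser actually learns.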