{"title":"StableID: Multimodal learning for stable identity in personalized Text-to-Face generation","authors":"Xueping Wang, Yixuan Gao, Yanan Liu, Feihu Yan, Guangzhe Zhao","doi":"10.1016/j.patrec.2025.02.018","DOIUrl":null,"url":null,"abstract":"<div><div>Personalized Text-To-Face (TTF) generation aims to inject new subjects (e.g., identity information) into the text-to-image diffusion model, generating images that align with text prompts and maintain subject consistency in different contexts. Currently, some methods usually overfit the reference images on text prompts related to facial attributes, or ignore facial details to fit the text prompts, thus weakening identity consistency. To address these issues, we propose a personalized TTF method for generating <strong>Stable ID</strong>entities without fine-tuning, named StableID. Firstly, multimodal-guided identity constraint is proposed to ensure stability of identity features and preservation of face details, along with semantic editing capabilities. Secondly, we design residual cross-attention based mask balancing loss that effectively separates identity information from non-identity related backgrounds, balancing the effects of text prompts and identity constraints. Furthermore, we develop a portrait dataset with detailed facial prompt, as well as decoupled editable attribute vectors, enabling smooth and precise control over fine-grained semantic edits. Extensive experimental results show that our method outperforms the state-of-the-arts in stable identity consistency.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"190 ","pages":"Pages 153-160"},"PeriodicalIF":3.9000,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525000601","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Personalized Text-to-Face (TTF) generation aims to inject new subjects (e.g., identity information) into a text-to-image diffusion model, generating images that align with text prompts while maintaining subject consistency across different contexts. Existing methods tend either to overfit the reference images on text prompts related to facial attributes, or to ignore facial details in order to fit the text prompts, both of which weaken identity consistency. To address these issues, we propose a personalized TTF method for generating Stable IDentities without fine-tuning, named StableID. First, a multimodal-guided identity constraint is proposed to ensure the stability of identity features and the preservation of facial details, while retaining semantic editing capability. Second, we design a residual cross-attention-based mask balancing loss that effectively separates identity information from non-identity-related backgrounds, balancing the effects of text prompts and identity constraints. Furthermore, we develop a portrait dataset with detailed facial prompts, as well as decoupled editable attribute vectors, enabling smooth and precise control over fine-grained semantic edits. Extensive experimental results show that our method outperforms state-of-the-art methods in maintaining stable identity consistency.
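The abstract only names these components, so the following PyTorch-style sketch is an illustration of the general shape such objectives can take, not the authors' released implementation. The face encoder, tensor shapes, loss weights, and the attribute-editing helper are all assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def identity_loss(gen_faces, ref_faces, face_encoder):
    """Cosine-distance identity constraint between generated and reference faces.
    `face_encoder` stands in for any pretrained face-embedding network
    (e.g., an ArcFace-style model); this is an assumption, not the paper's module."""
    gen_emb = F.normalize(face_encoder(gen_faces), dim=-1)
    ref_emb = F.normalize(face_encoder(ref_faces), dim=-1)
    return (1.0 - (gen_emb * ref_emb).sum(dim=-1)).mean()

def mask_balancing_loss(attn_map, face_mask):
    """One plausible mask-balancing term: push the identity token's cross-attention
    mass onto the face region and away from the background.
    `attn_map` is a (B, H, W) attention map for the identity token;
    `face_mask` is a binary (B, H, W) face-region mask."""
    inside = (attn_map * face_mask).sum(dim=(1, 2))
    total = attn_map.sum(dim=(1, 2)) + 1e-8
    # Penalize the fraction of attention mass that leaks into the background.
    return (1.0 - inside / total).mean()

def edit_identity(id_emb, attr_vec, alpha):
    """Hypothetical fine-grained edit: shift the identity embedding along a
    decoupled attribute direction (e.g., 'smile') with strength `alpha`."""
    return id_emb + alpha * attr_vec

# Hypothetical training combination with the standard diffusion denoising loss:
# loss = l_denoise + lambda_id * identity_loss(...) + lambda_mask * mask_balancing_loss(...)
```

Under this reading, the identity term anchors the generated face to the reference embedding, while the mask term keeps identity conditioning from overriding the background described by the text prompt; the actual weighting and attention extraction in StableID may differ.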
Journal Introduction
Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.