{"title":"Mitigating inappropriate concepts in text-to-image generation with attention-guided Image editing.","authors":"Jiyeon Oh, Jae-Yeop Jeong, Yeong-Gi Hong, Jin-Woo Jeong","doi":"10.7717/peerj-cs.3170","DOIUrl":null,"url":null,"abstract":"<p><p>Text-to-image generative models have recently garnered a significant surge due to their ability to produce diverse images based on given text prompts. However, concerns regarding the occasional generation of inappropriate, offensive, or explicit content have arisen. To address this, we propose a simple yet effective method that leverages attention map to selectively suppress inappropriate concepts during image generation. Unlike existing approaches that often sacrifice original image context or demand substantial computational overhead, our method preserves image integrity without requiring additional model training or extensive engineering effort. To evaluate our method, we conducted comprehensive quantitative assessments on inappropriateness reduction, text fidelity, image consistency, and computational cost, alongside an online human perceptual study involving 20 participants. The results from our statistical analysis demonstrated that our method effectively removes inappropriate content while preserving the integrity of the original images with high computational efficiency.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"11 ","pages":"e3170"},"PeriodicalIF":2.5000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12453712/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PeerJ Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.7717/peerj-cs.3170","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Text-to-image generative models have recently attracted significant attention due to their ability to produce diverse images from given text prompts. However, concerns have arisen regarding their occasional generation of inappropriate, offensive, or explicit content. To address this, we propose a simple yet effective method that leverages attention maps to selectively suppress inappropriate concepts during image generation. Unlike existing approaches, which often sacrifice the original image context or incur substantial computational overhead, our method preserves image integrity without requiring additional model training or extensive engineering effort. To evaluate the method, we conducted comprehensive quantitative assessments of inappropriateness reduction, text fidelity, image consistency, and computational cost, alongside an online human perceptual study involving 20 participants. Our statistical analysis demonstrated that the method effectively removes inappropriate content while preserving the integrity of the original images with high computational efficiency.
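The abstract does not detail the exact procedure, but a common way to realize attention-guided concept suppression in latent diffusion pipelines is to downscale the cross-attention columns belonging to the offending prompt tokens and renormalize. The sketch below illustrates only that core operation; the function name, tensor shapes, and token indices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the paper's code): suppress a target
# concept by attenuating the cross-attention weights assigned to its prompt
# tokens, then renormalizing each attention row.
import torch

def suppress_concept(attn_probs: torch.Tensor,
                     concept_token_ids: list[int],
                     scale: float = 0.0) -> torch.Tensor:
    """attn_probs: (heads, pixels, tokens) softmax-normalized cross-attention.
    concept_token_ids: prompt-token positions of the inappropriate concept
    (hypothetical indices for illustration).
    scale: 0.0 removes the concept's influence; values in (0, 1) attenuate it.
    Returns row-renormalized attention probabilities."""
    edited = attn_probs.clone()
    edited[..., concept_token_ids] *= scale  # downweight the concept's tokens
    # Renormalize so each pixel's attention over tokens still sums to 1.
    edited = edited / edited.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return edited

# Toy usage: 8 heads, a 16x16 latent (256 pixels), 77 prompt tokens.
attn = torch.softmax(torch.randn(8, 256, 77), dim=-1)
edited = suppress_concept(attn, concept_token_ids=[5, 6], scale=0.0)
assert torch.allclose(edited.sum(-1), torch.ones(8, 256))
```

Because the edit acts only on the attention columns of the flagged tokens, the rest of the prompt continues to steer generation as before, which is consistent with the paper's claim of preserving the original image context without retraining.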
Journal Information:
PeerJ Computer Science is an open access journal covering all subject areas in computer science, backed by a prestigious advisory board and more than 300 academic editors.