Yuzhu Cai, Sheng Yin, Yuxi Wei, Chenxin Xu, Weibo Mao, Felix Juefei-Xu, Siheng Chen, Yanfeng Wang
{"title":"Ethical-Lens: Curbing malicious usages of open-source text-to-image models.","authors":"Yuzhu Cai, Sheng Yin, Yuxi Wei, Chenxin Xu, Weibo Mao, Felix Juefei-Xu, Siheng Chen, Yanfeng Wang","doi":"10.1016/j.patter.2025.101187","DOIUrl":null,"url":null,"abstract":"<p><p>The burgeoning landscape of text-to-image models, exemplified by innovations such as Midjourney and DALL·E 3, has revolutionized content creation across diverse sectors. However, these advances bring forth critical ethical concerns, particularly with the misuse of open-source models to generate content that violates societal norms. Addressing this, we introduce Ethical-Lens, a framework designed to facilitate the value-aligned usage of text-to-image tools without necessitating internal model revision. Ethical-Lens ensures value alignment in text-to-image models across toxicity and bias dimensions by refining user commands and rectifying model outputs. Systematic evaluation metrics, combining GPT4-V, HEIM, and FairFace scores, assess alignment capability. Our experiments reveal that Ethical-Lens enhances alignment capabilities to levels comparable with or superior to commercial models such as DALL <math><mrow><mo>·</mo></mrow> </math> E 3, while preserving the quality of generated images. 
This study indicates the potential of Ethical-Lens to promote the sustainable development of open-source text-to-image tools and their beneficial integration into society.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":"6 3","pages":"101187"},"PeriodicalIF":6.7000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11963081/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patterns","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.patter.2025.101187","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/3/14 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
The burgeoning landscape of text-to-image models, exemplified by innovations such as Midjourney and DALL·E 3, has revolutionized content creation across diverse sectors. However, these advances bring forth critical ethical concerns, particularly the misuse of open-source models to generate content that violates societal norms. Addressing this, we introduce Ethical-Lens, a framework designed to facilitate the value-aligned usage of text-to-image tools without necessitating internal model revision. Ethical-Lens ensures value alignment in text-to-image models across toxicity and bias dimensions by refining user commands and rectifying model outputs. Systematic evaluation metrics, combining GPT-4V, HEIM, and FairFace scores, assess alignment capability. Our experiments reveal that Ethical-Lens enhances alignment capabilities to levels comparable with or superior to commercial models such as DALL·E 3, while preserving the quality of generated images. This study indicates the potential of Ethical-Lens to promote the sustainable development of open-source text-to-image tools and their beneficial integration into society.
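The abstract describes a two-stage design: the user's command is refined before generation, and the model's output is rectified afterward, all without touching the model's weights. The following is a minimal, hypothetical sketch of that wrapper pattern only; the function names, the keyword blocklist, and the pass-through output check are illustrative assumptions, not the paper's actual implementation (which uses learned components and GPT-4V/HEIM/FairFace-based evaluation).

```python
from typing import Callable

# Illustrative toxicity terms; the real system uses learned scrutiny, not a blocklist.
BLOCKLIST = {"violence", "gore"}

def refine_prompt(prompt: str) -> str:
    """Stage 1 (hypothetical): scrub flagged terms from the user command."""
    return " ".join(w for w in prompt.split() if w.lower() not in BLOCKLIST)

def rectify_output(image: str) -> str:
    """Stage 2 (hypothetical): post-hoc check on the generated output.
    A real system would run safety/bias classifiers and edit or reject here."""
    return image

def ethical_generate(prompt: str, model: Callable[[str], str]) -> str:
    """Wrap any text-to-image model without modifying its internals."""
    return rectify_output(model(refine_prompt(prompt)))

# Usage with a stand-in "model" that just echoes its prompt:
result = ethical_generate("a scene of violence in a city",
                          lambda p: f"<image of: {p}>")
print(result)  # -> <image of: a scene of in a city>
```

The point of the sketch is the interface: because both stages operate on the model's inputs and outputs, the wrapper applies to any open-source text-to-image model as a black box.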