{"title":"GLEAN: Generative Learning for Eliminating Adversarial Noise","authors":"Justin Lyu Kim, Kyoungwan Woo","doi":"arxiv-2409.10578","DOIUrl":null,"url":null,"abstract":"In the age of powerful diffusion models such as DALL-E and Stable Diffusion,\nmany in the digital art community have suffered style mimicry attacks due to\nfine-tuning these models on their works. The ability to mimic an artist's style\nvia text-to-image diffusion models raises serious ethical issues, especially\nwithout explicit consent. Glaze, a tool that applies various ranges of\nperturbations to digital art, has shown significant success in preventing style\nmimicry attacks, at the cost of artifacts ranging from imperceptible noise to\nsevere quality degradation. The release of Glaze has sparked further\ndiscussions regarding the effectiveness of similar protection methods. In this\npaper, we propose GLEAN- applying I2I generative networks to strip\nperturbations from Glazed images, evaluating the performance of style mimicry\nattacks before and after GLEAN on the results of Glaze. GLEAN aims to support\nand enhance Glaze by highlighting its limitations and encouraging further\ndevelopment.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Cryptography and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10578","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In the age of powerful diffusion models such as DALL-E and Stable Diffusion, many in the digital art community have suffered style mimicry attacks, in which these models are fine-tuned on their works. The ability to mimic an artist's style via text-to-image diffusion models raises serious ethical issues, especially without explicit consent. Glaze, a tool that applies perturbations of varying intensity to digital art, has shown significant success in preventing style mimicry attacks, at the cost of artifacts ranging from imperceptible noise to severe quality degradation. The release of Glaze has sparked further discussion about the effectiveness of such protection methods. In this paper, we propose GLEAN: applying image-to-image (I2I) generative networks to strip perturbations from Glazed images, and we evaluate the performance of style mimicry attacks on Glaze's outputs before and after GLEAN. GLEAN aims to support and enhance Glaze by highlighting its limitations and encouraging further development.
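
The abstract does not spell out GLEAN's architecture, but the core idea, passing a Glazed image through an image-to-image generative model so the protective perturbations are regenerated away, can be sketched with an off-the-shelf pipeline. The snippet below is a minimal illustration only, assuming the Hugging Face diffusers library and Stable Diffusion v1.5; the model choice, denoising strength, prompt, and file names are placeholders and not the paper's actual method.

```python
# Minimal sketch (not the authors' implementation): run an image-to-image
# diffusion pipeline over a Glazed image at low denoising strength, which
# tends to wash out high-frequency adversarial perturbations while keeping
# the composition. Model ID, strength, and file paths are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a Glazed artwork and resize to the model's native resolution.
glazed = Image.open("glazed_artwork.png").convert("RGB").resize((512, 512))

# Low strength preserves the original content; the denoising pass
# regenerates fine detail, discarding the embedded perturbations.
cleaned = pipe(
    prompt="a high-quality digital painting",
    image=glazed,
    strength=0.3,
    guidance_scale=7.5,
).images[0]

cleaned.save("cleaned_artwork.png")
```

In such a setup, the cleaned output could then be used to fine-tune a style mimicry model, and the mimicry quality compared against fine-tuning directly on Glazed images, which is the kind of before-and-after evaluation the abstract describes.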