{"title":"High-Frequency Anti-DreamBooth: Robust Defense Against Image Synthesis","authors":"Takuto Onikubo, Yusuke Matsui","doi":"arxiv-2409.08167","DOIUrl":null,"url":null,"abstract":"Recently, text-to-image generative models have been misused to create\nunauthorized malicious images of individuals, posing a growing social problem.\nPrevious solutions, such as Anti-DreamBooth, add adversarial noise to images to\nprotect them from being used as training data for malicious generation.\nHowever, we found that the adversarial noise can be removed by adversarial\npurification methods such as DiffPure. Therefore, we propose a new adversarial\nattack method that adds strong perturbation on the high-frequency areas of\nimages to make it more robust to adversarial purification. Our experiment\nshowed that the adversarial images retained noise even after adversarial\npurification, hindering malicious image generation.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08167","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recently, text-to-image generative models have been misused to create
unauthorized malicious images of individuals, posing a growing social problem.
Previous solutions, such as Anti-DreamBooth, add adversarial noise to images to
protect them from being used as training data for malicious generation.
However, we found that this adversarial noise can be removed by adversarial
purification methods such as DiffPure. We therefore propose a new adversarial
attack method that adds strong perturbations to the high-frequency areas of
images, making the protection more robust to adversarial purification. Our
experiments show that the adversarial images retain their noise even after
adversarial purification, hindering malicious image generation.
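
To make the abstract's core idea concrete, below is a minimal PyTorch sketch of one way to confine an adversarial perturbation to the high-frequency band of an image using an FFT mask inside a PGD-style loop. The mask radius, step sizes, and the `loss_fn` placeholder (standing in for a protection objective such as degrading DreamBooth fine-tuning) are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: restrict an adversarial perturbation to high spatial frequencies,
# where diffusion-based purifiers such as DiffPure remove noise less cleanly.
import torch
import torch.fft


def high_frequency_mask(h: int, w: int, radius: float = 0.25) -> torch.Tensor:
    """Binary mask keeping only frequencies outside a centered low-pass disk."""
    ys = torch.linspace(-0.5, 0.5, h).view(-1, 1)
    xs = torch.linspace(-0.5, 0.5, w).view(1, -1)
    dist = torch.sqrt(ys**2 + xs**2)  # distance from the DC term (after fftshift)
    return (dist > radius).float()    # 1 = high frequency, 0 = low frequency


def project_to_high_freq(delta: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Zero out the low-frequency components of a perturbation `delta` (C,H,W)."""
    spec = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    spec = spec * mask  # keep only high-frequency energy
    return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real


def protect(image, loss_fn, eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD-style loop whose updates are projected onto the high-frequency band.

    `loss_fn` is a hypothetical protection objective; it is not defined here.
    """
    _, h, w = image.shape
    mask = high_frequency_mask(h, w)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(image + delta)
        loss.backward()
        with torch.no_grad():
            step = alpha * delta.grad.sign()
            delta += project_to_high_freq(step, mask)  # restrict update to high freqs
            delta.clamp_(-eps, eps)                    # L-infinity budget
        delta.grad.zero_()
    return (image + delta).detach().clamp(0, 1)
```

Note that the final L-infinity clamp operates in the pixel domain and can reintroduce a small amount of low-frequency energy; the paper's actual projection scheme may handle this differently.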