Felice Antonio Merra, V. W. Anelli, Tommaso Di Noia, Daniele Malitesta, Alberto Carlo Maria Mancino
{"title":"Denoise to Protect: A Method to Robustify Visual Recommenders from Adversaries","authors":"Felice Antonio Merra, V. W. Anelli, Tommaso Di Noia, Daniele Malitesta, Alberto Carlo Maria Mancino","doi":"10.1145/3539618.3591971","DOIUrl":null,"url":null,"abstract":"While the integration of product images enhances the recommendation performance of visual-based recommender systems (VRSs), this can make the model vulnerable to adversaries that can produce noised images capable to alter the recommendation behavior. Recently, stronger and stronger adversarial attacks have emerged to raise awareness of these risks; however, effective defense methods are still an urgent open challenge. In this work, we propose \"Adversarial Image Denoiser\" (AiD), a novel defense method that cleans up the item images by malicious perturbations. In particular, we design a training strategy whose denoising objective is to minimize both the visual differences between clean and adversarial images and preserve the ranking performance in authentic settings. We perform experiments to evaluate the efficacy of AiD using three state-of-the-art adversarial attacks mounted against standard VRSs. Code and datasets at https://github.com/sisinflab/Denoise-to-protect-VRS.","PeriodicalId":425056,"journal":{"name":"Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":"70 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3539618.3591971","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
While integrating product images enhances the recommendation performance of visual-based recommender systems (VRSs), it can also make the model vulnerable to adversaries that produce noised images capable of altering the recommendation behavior. Recently, increasingly strong adversarial attacks have emerged, raising awareness of these risks; however, effective defense methods remain an urgent open challenge. In this work, we propose the "Adversarial Image Denoiser" (AiD), a novel defense method that cleans item images of malicious perturbations. In particular, we design a training strategy whose denoising objective is both to minimize the visual differences between clean and adversarial images and to preserve ranking performance in authentic (non-adversarial) settings. We evaluate the efficacy of AiD against three state-of-the-art adversarial attacks mounted on standard VRSs. Code and datasets are available at https://github.com/sisinflab/Denoise-to-protect-VRS.
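The abstract does not spell out the exact objective, so the following is only a minimal sketch of a two-part denoising loss of the kind it describes, in PyTorch style. All names here (`aid_training_step`, `denoiser`, `score_fn`) and the concrete loss choices (MSE reconstruction, a BPR-style ranking term, weight `lam`) are assumptions for illustration, not the paper's actual formulation; the official code is at the repository linked above.

```python
import torch
import torch.nn.functional as F

def aid_training_step(denoiser, score_fn, x_clean, x_adv, x_neg, user, lam=1.0):
    """One hypothetical training step for an AiD-style denoiser.

    Combines (i) a reconstruction term pulling the denoised adversarial
    image toward its clean counterpart with (ii) a ranking-preservation
    term computed on the denoised image, mirroring the abstract's
    two-part objective. Exact losses and weights are assumptions.

    denoiser: image -> image network being trained
    score_fn: (user, item_image) -> relevance score from the VRS
    x_clean, x_adv: clean and adversarially perturbed positive-item images
    x_neg: image of a sampled negative item
    """
    x_den = denoiser(x_adv)

    # (i) minimize visual difference between clean and denoised images
    recon_loss = F.mse_loss(x_den, x_clean)

    # (ii) preserve ranking: the denoised positive image should still
    # outrank a negative item's image (BPR-style pairwise loss; assumed)
    s_pos = score_fn(user, x_den)
    s_neg = score_fn(user, x_neg)
    rank_loss = -F.logsigmoid(s_pos - s_neg).mean()

    return recon_loss + lam * rank_loss
```

The intuition for the second term, as the abstract suggests, is that reconstruction alone could wash out visual detail the recommender relies on; coupling the denoiser to a ranking objective keeps recommendation quality intact on authentic images while the perturbations are removed.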