Enhancing Adversarial Examples Transferability via Ensemble Feature Manifolds
Dongdong Yang, Wenjie Li, R. Ni, Yao Zhao
Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia, 2021-10-20
DOI: 10.1145/3475724.3483608 (https://doi.org/10.1145/3475724.3483608)
Citations: 2
Abstract
An adversarial attack causes intended misclassification by adding imperceptible perturbations to benign inputs, and it provides a way to evaluate the robustness of models. Many existing adversarial attacks achieve good performance in the white-box setting. However, the adversarial examples they generate typically overfit the particular architecture of the source model, resulting in low transferability in black-box scenarios. In this work, we propose a novel feature-level attack called the Features-Ensemble Generative Adversarial Network (FEGAN), which ensembles multiple feature manifolds to capture the intrinsic adversarial information most likely to cause misclassification across many models, thereby improving the transferability of adversarial examples. Accordingly, a generator trained on various latent feature vectors of benign inputs can produce adversarial examples containing this adversarial information. Extensive experiments on the MNIST and CIFAR-10 datasets demonstrate that the proposed method improves the transferability of adversarial examples while maintaining the attack success rate in the white-box setting. In addition, the generated adversarial examples are more realistic, with a distribution close to that of the real data.
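The core intuition behind ensembling feature manifolds can be illustrated without the GAN machinery the paper actually trains. The hypothetical sketch below uses plain gradient ascent on a feature-distortion loss summed over an ensemble of toy linear feature extractors; all names, the linear "models", and the hyperparameters are illustrative assumptions, not the authors' FEGAN implementation. The idea it demonstrates is the same: a perturbation optimized to distort features on *several* source models at once is less likely to overfit any single architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "source models": random linear feature extractors standing in
# for the feature manifolds of different architectures (an assumption;
# real models would be deep networks).
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((8, 16))

def ensemble_feature_loss(x_adv, x, extractors):
    """Sum of squared feature-space distances across the ensemble."""
    return sum(float(np.sum((W @ x_adv - W @ x) ** 2)) for W in extractors)

def feature_ensemble_attack(x, extractors, eps=0.1, steps=20, lr=0.05):
    """Gradient-ascent sketch: maximize feature distortion on every
    extractor in the ensemble while keeping the perturbation inside an
    L-infinity ball of radius eps around the benign input."""
    # Random start: at delta = 0 the quadratic loss has zero gradient.
    delta = rng.uniform(-eps / 10, eps / 10, size=x.shape)
    for _ in range(steps):
        # Analytic gradient of the quadratic loss w.r.t. delta.
        grad = sum(2.0 * W.T @ (W @ (x + delta) - W @ x) for W in extractors)
        # Signed gradient step, then project back into the eps-ball.
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return x + delta

x = rng.standard_normal(16)
x_adv = feature_ensemble_attack(x, [W1, W2])
```

In this sketch the perturbation stays imperceptibly small (bounded by `eps`) yet distorts the features of both extractors simultaneously; FEGAN instead amortizes this optimization into a generator so that adversarial examples are produced in a single forward pass.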