{"title":"A Novel Image Perturbation Approach: Perturbing Latent Representation","authors":"Nader Asadi, M. Eftekhari","doi":"10.1109/IranianCEE.2019.8786388","DOIUrl":null,"url":null,"abstract":"Deep neural networks are state-of-art models in computer vision and image recognition problems. However, it is shown that these models are highly vulnerable to intentionally perturbed inputs named adversarial examples. This problem has attracted a lot of attention in recent years. In this paper, a novel approach is proposed for generating adversarial examples by perturbing latent representation of an input image that causes to mislead trained classifier network. Also, it is shown that perturbing dense representation of image results in transforming key features of it with respect to classification task. Our experimental results show that this slight transformation in the features of the image can easily fool the classifier network. We also show the impact of adding perturbations with the large magnitude to the corresponding generated adversarial example.","PeriodicalId":6683,"journal":{"name":"2019 27th Iranian Conference on Electrical Engineering (ICEE)","volume":"47 1","pages":"1895-1899"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 27th Iranian Conference on Electrical Engineering (ICEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IranianCEE.2019.8786388","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep neural networks are state-of-the-art models for computer vision and image recognition problems. However, these models have been shown to be highly vulnerable to intentionally perturbed inputs known as adversarial examples. This problem has attracted considerable attention in recent years. In this paper, a novel approach is proposed for generating adversarial examples by perturbing the latent representation of an input image, which misleads a trained classifier network. It is also shown that perturbing the dense representation of an image transforms its key features with respect to the classification task. Our experimental results show that this slight transformation of the image's features can easily fool the classifier network. We also show the impact of adding large-magnitude perturbations to the corresponding generated adversarial example.
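The paper itself does not include code, but the core idea it describes, perturbing an image's latent code rather than its pixels so that the decoded image fools a classifier, can be illustrated with a short sketch. Everything below is an assumption for illustration, not the authors' implementation: the `encoder`/`decoder`/`classifier` interfaces, the sign-gradient ascent update on the latent code, and the hyperparameters `epsilon`, `steps`, and `step_size` are all hypothetical choices.

```python
import torch
import torch.nn.functional as F

def latent_perturbation_attack(encoder, decoder, classifier, x, y_true,
                               epsilon=0.1, steps=10, step_size=0.02):
    """Craft an adversarial image by perturbing the latent code of `x`.

    Illustrative sketch only: the encoder/decoder/classifier interfaces
    and the update rule are assumptions, not the paper's published method.
    """
    z = encoder(x).detach()                  # latent representation of the clean image
    delta = torch.zeros_like(z, requires_grad=True)

    for _ in range(steps):
        x_adv = decoder(z + delta)           # decode the perturbed latent code
        loss = F.cross_entropy(classifier(x_adv), y_true)
        loss.backward()                      # gradient of the loss w.r.t. delta
        with torch.no_grad():
            delta += step_size * delta.grad.sign()   # ascend the classifier loss
            delta.clamp_(-epsilon, epsilon)          # bound the latent perturbation
        delta.grad.zero_()

    return decoder(z + delta).detach()       # adversarial image in pixel space
```

In this sketch, `epsilon` bounds the magnitude of the latent perturbation; loosening that bound would correspond to the large-magnitude perturbations whose effect on the generated adversarial example the abstract mentions.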