{"title":"Defense-Net: Defend Against a Wide Range of Adversarial Attacks through Adversarial Detector","authors":"A. S. Rakin, Deliang Fan","doi":"10.1109/ISVLSI.2019.00067","DOIUrl":null,"url":null,"abstract":"Recent studies have demonstrated that Deep Neural Networks(DNNs) are vulnerable to adversarial input perturbations: meticulously engineered slight perturbations can result in inappropriate categorization of valid images. Adversarial Training has been one of the successful defense approaches in recent times. In this work, we propose an alternative to adversarial training by training a separate model with adversarial examples instead of the original classifier. We train an adversarial detector network known as 'Defense-Net' with strong adversary while training the original classifier with only clean training data. We propose a new adversarial cross entropy loss function to train Defense-Net appropriately differentiate between different adversarial examples. Defense-Net solves three major concerns regarding the development of a successful adversarial defense method. First, our defense does not have clean data accuracy degradation in contrast to traditional adversarial training based defenses. Second, we demonstrate this resiliency with experiments on the MNIST and CIFAR-10 data sets, and show that the state-of-the-art accuracy under the most powerful known white-box attack was increased from 94.02 % to 99.2 % on MNIST, and 47 % to 94.79 % on CIFAR-10. 
Finally, unlike most recent defenses, our approach does not suffer from obfuscated gradient and can successfully defend strong BPDA, PGD, FGSM and C & W attacks.","PeriodicalId":6703,"journal":{"name":"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","volume":"86 1","pages":"332-337"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISVLSI.2019.00067","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Recent studies have demonstrated that Deep Neural Networks (DNNs) are vulnerable to adversarial input perturbations: carefully engineered, slight perturbations can cause valid images to be misclassified. Adversarial training has been one of the more successful defense approaches in recent years. In this work, we propose an alternative to adversarial training: instead of retraining the original classifier on adversarial examples, we train a separate model on them. We train an adversarial detector network, called 'Defense-Net', on strong adversarial examples, while training the original classifier only on clean data. We propose a new adversarial cross-entropy loss function that trains Defense-Net to appropriately differentiate between different adversarial examples. Defense-Net addresses three major concerns in developing a successful adversarial defense. First, unlike traditional adversarial-training-based defenses, our defense does not degrade clean-data accuracy. Second, we demonstrate this resiliency with experiments on the MNIST and CIFAR-10 data sets, showing that the state-of-the-art accuracy under the most powerful known white-box attack increased from 94.02% to 99.2% on MNIST, and from 47% to 94.79% on CIFAR-10. Finally, unlike most recent defenses, our approach does not suffer from obfuscated gradients and can successfully defend against strong BPDA, PGD, FGSM, and C&W attacks.
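The abstract describes a defense built from two separately trained models: a detector trained on adversarial examples with a cross-entropy loss over attack categories, and an unmodified classifier trained on clean data, with the detector gating what reaches the classifier. The sketch below illustrates that two-model pipeline in plain NumPy; the loss function, the class layout (0 = clean, 1..K = attack family), and the rejection-based inference step are illustrative assumptions, not the paper's exact architecture or formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def adversarial_cross_entropy(det_logits, labels):
    """Cross-entropy over detector classes (clean vs. attack types).

    `labels` are integer ids: 0 = clean, 1..K = attack family
    (e.g. FGSM, PGD, C&W). A hypothetical stand-in for the paper's
    adversarial cross-entropy loss, shown only to illustrate that the
    detector is trained to *distinguish* different adversarial examples
    rather than merely flag them.
    """
    p = softmax(det_logits)
    n = det_logits.shape[0]
    return -np.log(p[np.arange(n), labels] + 1e-12).mean()

def defended_predict(x, detector, classifier, reject_label=-1):
    """Inference-time pipeline: run the detector first, and only trust
    the clean-data classifier on inputs the detector judges clean.
    Inputs flagged as adversarial are rejected."""
    is_adv = detector(x)          # boolean mask, True = adversarial
    preds = classifier(x).copy()  # classifier never saw adversarial data
    preds[is_adv] = reject_label
    return preds
```

Because the classifier itself is never trained on perturbed data, its clean accuracy is untouched; the detector alone absorbs the burden of recognizing attacks, which is the design choice the abstract credits for avoiding the usual accuracy trade-off.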