{"title":"保护基于深度学习的异常检测系统免受白盒攻击和后门攻击","authors":"Khaled Alrawashdeh, Stephen Goldsmith","doi":"10.1109/ISTAS50296.2020.9462227","DOIUrl":null,"url":null,"abstract":"Deep Neural Network (DNN) has witnessed rapid progress and significant successes in the recent years. Wide range of applications depends on the high performance of deep learning to solve real-life challenges. Deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to adversarial examples and backdoor attacks. Stealthy adversarial examples and backdoor attacks can easily fool deep neural networks to generate the wrong results. The risk of adversarial examples attacks that target deep learning models impedes the wide deployment of deep neural networks in safety-critical environments. In this work we propose a defensive technique for deep learning by combining activation function and neurons pruning to reduce the effects of adversarial examples and backdoor attacks. We evaluate the efficacy of the method on an anomaly detection application using Deep Belief Network (DBN) and Coupled Generative Adversarial Network (CoGAN). The method reduces the loss of accuracy from the attacks from an average 10% to 2% using DBN and from an average 14% to 2% using CoGAN. We evaluate the method using two benchmark datasets: NSL-KDD and ransomware.","PeriodicalId":196560,"journal":{"name":"2020 IEEE International Symposium on Technology and Society (ISTAS)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Defending Deep Learning Based Anomaly Detection Systems Against White-Box Adversarial Examples and Backdoor Attacks\",\"authors\":\"Khaled Alrawashdeh, Stephen Goldsmith\",\"doi\":\"10.1109/ISTAS50296.2020.9462227\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Neural Network (DNN) has witnessed rapid progress and significant successes in the recent years. Wide range of applications depends on the high performance of deep learning to solve real-life challenges. Deep learning is being applied in many safety-critical environments. However, deep neural networks have been recently found vulnerable to adversarial examples and backdoor attacks. Stealthy adversarial examples and backdoor attacks can easily fool deep neural networks to generate the wrong results. The risk of adversarial examples attacks that target deep learning models impedes the wide deployment of deep neural networks in safety-critical environments. In this work we propose a defensive technique for deep learning by combining activation function and neurons pruning to reduce the effects of adversarial examples and backdoor attacks. We evaluate the efficacy of the method on an anomaly detection application using Deep Belief Network (DBN) and Coupled Generative Adversarial Network (CoGAN). The method reduces the loss of accuracy from the attacks from an average 10% to 2% using DBN and from an average 14% to 2% using CoGAN. 
We evaluate the method using two benchmark datasets: NSL-KDD and ransomware.\",\"PeriodicalId\":196560,\"journal\":{\"name\":\"2020 IEEE International Symposium on Technology and Society (ISTAS)\",\"volume\":\"48 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Symposium on Technology and Society (ISTAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISTAS50296.2020.9462227\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Symposium on Technology and Society (ISTAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISTAS50296.2020.9462227","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Defending Deep Learning Based Anomaly Detection Systems Against White-Box Adversarial Examples and Backdoor Attacks
Deep Neural Networks (DNNs) have made rapid progress and achieved significant successes in recent years. A wide range of applications relies on the high performance of deep learning to solve real-life challenges, and deep learning is increasingly applied in safety-critical environments. However, deep neural networks have recently been found vulnerable to adversarial examples and backdoor attacks. Stealthy adversarial examples and backdoor attacks can easily fool deep neural networks into producing wrong results. The risk of adversarial-example attacks targeting deep learning models impedes the wide deployment of deep neural networks in safety-critical environments. In this work we propose a defensive technique for deep learning that combines activation functions with neuron pruning to reduce the effects of adversarial examples and backdoor attacks. We evaluate the efficacy of the method on an anomaly detection application using a Deep Belief Network (DBN) and a Coupled Generative Adversarial Network (CoGAN). The method reduces the loss of accuracy caused by the attacks from an average of 10% to 2% for the DBN and from an average of 14% to 2% for the CoGAN. We evaluate the method on two benchmark datasets: NSL-KDD and a ransomware dataset.
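The abstract does not spell out the pruning procedure, so the following is only a minimal illustrative sketch of one common activation-based neuron-pruning defense (pruning neurons that stay dormant on clean data, on the intuition that backdoor triggers often exploit such spare capacity). The toy NumPy layer, the 25% pruning fraction, and the clean calibration batch are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of activation-based neuron pruning (illustrative only,
# not the authors' exact procedure): rank hidden neurons by their mean ReLU
# activation on a trusted clean batch and zero out the least-active ones.
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden layer: weights W (in_dim x hidden), biases b, ReLU activation.
in_dim, hidden = 20, 16
W = rng.normal(size=(in_dim, hidden))
b = np.zeros(hidden)

def hidden_activations(X):
    """ReLU activations of the hidden layer for a batch X of shape (n, in_dim)."""
    return np.maximum(0.0, X @ W + b)

# Clean (trusted) calibration batch, e.g. held-out anomaly-detection records.
X_clean = rng.normal(size=(256, in_dim))

# Rank neurons by mean activation on clean data.
mean_act = hidden_activations(X_clean).mean(axis=0)

# Prune the least-active fraction by zeroing incoming weights and biases.
prune_fraction = 0.25  # assumed value for illustration
n_prune = int(prune_fraction * hidden)
pruned_idx = np.argsort(mean_act)[:n_prune]
W[:, pruned_idx] = 0.0
b[pruned_idx] = 0.0

print("Pruned neuron indices:", sorted(pruned_idx.tolist()))
```

In practice such pruning is typically followed by fine-tuning on clean data so that accuracy on benign inputs is recovered while backdoor behavior remains suppressed.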