{"title":"Resilience of Pruned Neural Network Against Poisoning Attack","authors":"Bingyin Zhao, Yingjie Lao","doi":"10.1109/MALWARE.2018.8659362","DOIUrl":null,"url":null,"abstract":"In the past several years, machine learning, especially deep learning, has achieved remarkable success in various fields. However, it has been shown recently that machine learning algorithms are vulnerable to well-crafted attacks. For instance, poisoning attack is effective in manipulating the results of a predictive model by deliberately contaminating the training data. In this paper, we investigate the implication of network pruning on the resilience against poisoning attacks. Our experimental results show that pruning can effectively increase the difficulty of poisoning attack, possibly due to the reduced degrees of freedom in the pruned network. For example, in order to degrade the test accuracy below 60% for the MNIST-1-7 dataset, only less than 10 retraining epochs with poisoning data are needed for the original network, while about 16 and 40 epochs are required for the 90% and 99% pruned networks, respectively.","PeriodicalId":200928,"journal":{"name":"2018 13th International Conference on Malicious and Unwanted Software (MALWARE)","volume":"1122 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 13th International Conference on Malicious and Unwanted Software (MALWARE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MALWARE.2018.8659362","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
In the past several years, machine learning, especially deep learning, has achieved remarkable success in various fields. However, it has recently been shown that machine learning algorithms are vulnerable to well-crafted attacks. For instance, poisoning attacks can manipulate the results of a predictive model by deliberately contaminating its training data. In this paper, we investigate the implications of network pruning for resilience against poisoning attacks. Our experimental results show that pruning can effectively increase the difficulty of a poisoning attack, possibly due to the reduced degrees of freedom in the pruned network. For example, to degrade the test accuracy below 60% on the MNIST-1-7 dataset, fewer than 10 retraining epochs with poisoned data are needed for the original network, while about 16 and 40 epochs are required for the 90% and 99% pruned networks, respectively.
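The paper does not include code, but the experimental setup the abstract describes can be sketched as follows. This is a minimal illustration, assuming magnitude-based unstructured pruning and re-application of the pruning mask during poisoned retraining; the helper names (`magnitude_prune`, `retrain_with_poison`), the use of SGD, and the mask bookkeeping are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the pruning-then-poisoned-retraining setup (assumptions noted above).
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Zero out the smallest-magnitude weights in each weight matrix.

    Returns per-parameter binary masks so the zeroed weights can be kept
    at zero during subsequent retraining (an assumed convention).
    """
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and 1-D parameters
            continue
        k = int(sparsity * param.numel())
        if k < 1:
            continue
        # k-th smallest absolute value serves as the pruning threshold
        threshold = param.abs().flatten().kthvalue(k).values
        mask = (param.abs() > threshold).float()
        param.data.mul_(mask)
        masks[name] = mask
    return masks

def retrain_with_poison(model: nn.Module, poisoned_loader, masks: dict,
                        epochs: int, lr: float = 0.01):
    """Retrain on a (poisoned) data loader, re-applying the pruning masks
    after every optimizer step so the network stays sparse."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in poisoned_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            with torch.no_grad():
                for name, param in model.named_parameters():
                    if name in masks:
                        param.mul_(masks[name])  # keep pruned weights at zero
```

Under this sketch, the abstract's comparison would correspond to calling `magnitude_prune` with `sparsity=0.0`, `0.9`, and `0.99`, then measuring how many `retrain_with_poison` epochs each network needs before its clean test accuracy drops below 60%.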