PEACEPACT: Prioritizing Examples to Accelerate Perturbation-Based Adversary Generation for DNN Classification Testing
Zijie Li, Long Zhang, Jun Yan, Jian Zhang, Zhenyu Zhang, T. Tse
2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS), December 2020
DOI: 10.1109/QRS51102.2020.00059
Citations: 6
Abstract
Deep neural networks (DNNs) have been widely used in classification tasks. Studies have shown that DNNs may be fooled by artificial examples known as adversaries. A common technique for testing the robustness of a classifier is to apply perturbations (such as random noise) to existing examples and to try many of them iteratively, but this process is tedious and time-consuming. In this paper, we propose a technique to select examples from which adversaries can be generated more effectively. We study the vulnerability of examples by exploiting their class distinguishability. In this way, we can estimate the probability of generating adversaries from each example and prioritize all the examples accordingly. We have conducted an empirical study using a classic DNN model on four common datasets. The results reveal that the vulnerability of examples has a strong relationship with their distinguishability. The effectiveness of our technique is demonstrated by improvements of 98.90% to 99.68% in the F-measure.
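The abstract does not spell out how class distinguishability is measured or how the perturbations are applied. The sketch below is one plausible reading, assuming the margin between the top-1 and top-2 softmax probabilities as the distinguishability score and uniform random noise as the perturbation; the function names (`prioritize`, `search_adversaries`), the perturbation budget, and the noise magnitude are hypothetical and not taken from the paper.

```python
# Illustrative sketch: prioritize examples by an assumed class-distinguishability
# score (top-1 vs. top-2 softmax margin) before perturbation-based adversary search.
# The actual metric and perturbation scheme in the paper may differ.
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distinguishability(probs):
    """Margin between the top-1 and top-2 class probabilities.
    A small margin suggests the example lies near a decision boundary
    and is assumed to be more likely to yield an adversary."""
    top2 = np.sort(probs, axis=-1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def prioritize(model, examples):
    """Order example indices from least to most distinguishable.
    `model` is any callable mapping a batch of examples to logits."""
    probs = softmax(model(examples))
    return np.argsort(distinguishability(probs))

def search_adversaries(model, examples, labels, budget=100, eps=0.05, rng=None):
    """Try random-noise perturbations on examples in priority order and
    keep the first adversary found for each example, if any."""
    rng = rng or np.random.default_rng(0)
    found = {}
    for idx in prioritize(model, examples):
        x, y = examples[idx], labels[idx]
        for _ in range(budget):
            x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
            if softmax(model(x_adv[None]))[0].argmax() != y:
                found[idx] = x_adv
                break
    return found
```

Under this reading, examples with the smallest margins are perturbed first, so a fixed testing budget is spent where adversaries are assumed most likely to appear, which is the acceleration effect the title refers to.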