{"title":"具有对抗训练的压缩深度神经网络的鲁棒性","authors":"Yunchun Zhang, Chengjie Li, Wangwang Wang, Yuting Zhong, Xin Zhang, Yu-lin Zhang","doi":"10.1109/IISA52424.2021.9555552","DOIUrl":null,"url":null,"abstract":"Deep learning models are not applicable on edge computing devices. Consequently, compressed deep learning models gain momentum recently. Meanwhile, adversarial attacks targeting conventional deep neural networks (DNNs) and compressed DNNs are flouring nowadays. This paper firstly surveys the current compressing techniques, including pruning, distillation, quantization and weights sharing. Then, two iterative adversarial attacks, including I-FGSM (Iterative-Fast Gradient Sign Method) and PGD (Project Gradient Descent), are introduced. Three scenarios are built to test each DNN’s robustness against adversarial attacks. Besides, each DNN is trained with samples generated by different adversarial attacks and is then compressed under different pruning rate and tested under different attacks. The experimental results prove firstly that when a DNN is compressed with pruning rate lower than 70.0% is safe and with tiny accuracy decline. Second, iterative adversarial attacks are effective and cause dramatic performance degradation. Third, adversarial training helps to secure the compressed DNNs while lowering transferability of adversarial samples constructed by different attack algorithms.","PeriodicalId":437496,"journal":{"name":"2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robustness of Compressed Deep Neural Networks with Adversarial Training\",\"authors\":\"Yunchun Zhang, Chengjie Li, Wangwang Wang, Yuting Zhong, Xin Zhang, Yu-lin Zhang\",\"doi\":\"10.1109/IISA52424.2021.9555552\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning models are not applicable on edge computing devices. Consequently, compressed deep learning models gain momentum recently. Meanwhile, adversarial attacks targeting conventional deep neural networks (DNNs) and compressed DNNs are flouring nowadays. This paper firstly surveys the current compressing techniques, including pruning, distillation, quantization and weights sharing. Then, two iterative adversarial attacks, including I-FGSM (Iterative-Fast Gradient Sign Method) and PGD (Project Gradient Descent), are introduced. Three scenarios are built to test each DNN’s robustness against adversarial attacks. Besides, each DNN is trained with samples generated by different adversarial attacks and is then compressed under different pruning rate and tested under different attacks. The experimental results prove firstly that when a DNN is compressed with pruning rate lower than 70.0% is safe and with tiny accuracy decline. Second, iterative adversarial attacks are effective and cause dramatic performance degradation. 
Third, adversarial training helps to secure the compressed DNNs while lowering transferability of adversarial samples constructed by different attack algorithms.\",\"PeriodicalId\":437496,\"journal\":{\"name\":\"2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA)\",\"volume\":\"145 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IISA52424.2021.9555552\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISA52424.2021.9555552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robustness of Compressed Deep Neural Networks with Adversarial Training
Conventional deep learning models are often too large to deploy directly on edge computing devices. Consequently, compressed deep learning models have gained momentum recently. Meanwhile, adversarial attacks targeting both conventional deep neural networks (DNNs) and compressed DNNs are flourishing. This paper first surveys current compression techniques, including pruning, distillation, quantization, and weight sharing. Then, two iterative adversarial attacks, I-FGSM (Iterative Fast Gradient Sign Method) and PGD (Projected Gradient Descent), are introduced. Three scenarios are built to test each DNN's robustness against adversarial attacks. In addition, each DNN is trained on samples generated by different adversarial attacks, compressed under different pruning rates, and tested under different attacks. The experimental results show, first, that compressing a DNN with a pruning rate below 70.0% is safe, causing only a tiny accuracy decline. Second, iterative adversarial attacks are effective and cause dramatic performance degradation. Third, adversarial training helps secure compressed DNNs while lowering the transferability of adversarial samples constructed by different attack algorithms.
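For concreteness, the following is a minimal PyTorch sketch of the two iterative attacks named in the abstract. It is illustrative only, not the paper's implementation; the budget eps, step size alpha, and step count are hypothetical defaults. I-FGSM corresponds to the same loop without the random start.

    # Illustrative sketch of PGD / I-FGSM (not the paper's code).
    # Both attacks take repeated signed-gradient steps and project the
    # perturbation back into an L-infinity ball of radius eps.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, random_start=True):
        """Return adversarial examples for inputs x with labels y."""
        x_adv = x.clone().detach()
        if random_start:  # PGD starts from a random point in the eps-ball
            x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()            # ascend the loss
                x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project to eps-ball
                x_adv = torch.clamp(x_adv, 0.0, 1.0)           # keep valid pixel range
        return x_adv.detach()

    def ifgsm_attack(model, x, y, **kwargs):
        """I-FGSM is PGD with a deterministic start at the clean input."""
        return pgd_attack(model, x, y, random_start=False, **kwargs)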
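The abstract does not specify which pruning criterion is used; a common choice, and an assumption here, is global unstructured magnitude pruning, sketched below with PyTorch's built-in pruning utilities. The rate argument plays the role of the paper's pruning rate (e.g., the 70.0% threshold discussed in the results).

    # Illustrative sketch of global magnitude pruning (an assumed setup,
    # since the paper's exact pruning procedure is not given here).
    import torch
    import torch.nn.utils.prune as prune

    def prune_model(model, rate=0.5):
        """Zero out the smallest-magnitude weights across conv/linear layers."""
        params = [(m, "weight") for m in model.modules()
                  if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
        prune.global_unstructured(params,
                                  pruning_method=prune.L1Unstructured,
                                  amount=rate)
        for m, name in params:  # bake the masks into the weights
            prune.remove(m, name)
        return model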
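Finally, a minimal sketch of the adversarial-training step described in the third result, reusing pgd_attack from the first sketch. This is a generic PGD adversarial-training loop under assumed defaults, not the authors' training recipe.

    # Minimal adversarial-training epoch: train on attack-generated samples.
    def adversarial_train_epoch(model, loader, optimizer, device="cpu"):
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y)  # craft an adversarial batch
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()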