Robustness of Compressed Deep Neural Networks with Adversarial Training

Yunchun Zhang, Chengjie Li, Wangwang Wang, Yuting Zhong, Xin Zhang, Yu-lin Zhang
{"title":"Robustness of Compressed Deep Neural Networks with Adversarial Training","authors":"Yunchun Zhang, Chengjie Li, Wangwang Wang, Yuting Zhong, Xin Zhang, Yu-lin Zhang","doi":"10.1109/IISA52424.2021.9555552","DOIUrl":null,"url":null,"abstract":"Deep learning models are not applicable on edge computing devices. Consequently, compressed deep learning models gain momentum recently. Meanwhile, adversarial attacks targeting conventional deep neural networks (DNNs) and compressed DNNs are flouring nowadays. This paper firstly surveys the current compressing techniques, including pruning, distillation, quantization and weights sharing. Then, two iterative adversarial attacks, including I-FGSM (Iterative-Fast Gradient Sign Method) and PGD (Project Gradient Descent), are introduced. Three scenarios are built to test each DNN’s robustness against adversarial attacks. Besides, each DNN is trained with samples generated by different adversarial attacks and is then compressed under different pruning rate and tested under different attacks. The experimental results prove firstly that when a DNN is compressed with pruning rate lower than 70.0% is safe and with tiny accuracy decline. Second, iterative adversarial attacks are effective and cause dramatic performance degradation. Third, adversarial training helps to secure the compressed DNNs while lowering transferability of adversarial samples constructed by different attack algorithms.","PeriodicalId":437496,"journal":{"name":"2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISA52424.2021.9555552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep learning models are often too large to deploy on edge computing devices. Consequently, compressed deep learning models have gained momentum in recent years. Meanwhile, adversarial attacks targeting both conventional deep neural networks (DNNs) and compressed DNNs are flourishing. This paper first surveys current compression techniques, including pruning, distillation, quantization, and weight sharing. Then, two iterative adversarial attacks, I-FGSM (Iterative Fast Gradient Sign Method) and PGD (Projected Gradient Descent), are introduced. Three scenarios are built to test each DNN's robustness against adversarial attacks. In addition, each DNN is trained on samples generated by different adversarial attacks, then compressed at different pruning rates and tested under different attacks. The experimental results show, first, that compressing a DNN at a pruning rate below 70.0% is safe and incurs only a tiny accuracy decline. Second, iterative adversarial attacks are effective and cause dramatic performance degradation. Third, adversarial training helps secure the compressed DNNs while lowering the transferability of adversarial samples constructed by different attack algorithms.
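For reference, the sketch below shows a minimal PGD attack in PyTorch of the kind the abstract describes. It is a generic illustration, not the authors' implementation; the model interface, epsilon, step size, and step count are assumed placeholders. I-FGSM is the same signed-gradient iteration without the random start.

```python
# Minimal PGD sketch (L-infinity ball), assuming `model` returns logits
# and inputs are normalized to [0, 1]. Not the paper's code.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected Gradient Descent. With the random start removed and
    steps > 1, this reduces to I-FGSM; with steps=1 and alpha=eps, FGSM."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back
        # into the eps-ball around the clean input and the valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

The adversarial-training-then-pruning pipeline the abstract evaluates can be sketched in the same spirit, using PyTorch's built-in magnitude pruning. The helper names below are hypothetical, and the pruning rate is a parameter; the paper's 70.0% safety threshold would correspond to `rate=0.7` here.

```python
# Hedged sketch of adversarial training followed by magnitude pruning.
# Assumes `pgd_attack` and the imports from the sketch above.
import torch.nn.utils.prune as prune

def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)           # craft an adversarial batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # train on adversarial samples
    loss.backward()
    optimizer.step()
    return loss.item()

def prune_model(model, rate=0.5):
    # Unstructured L1 (magnitude) pruning of each Conv/Linear weight tensor;
    # call prune.remove(module, "weight") afterwards to make masks permanent.
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=rate)
    return model
```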