{"title":"Impact of Low-Bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks","authors":"Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre","doi":"10.1109/CW.2019.00057","DOIUrl":null,"url":null,"abstract":"As the will to deploy neural network models on embedded systems grows, and considering the related memory footprint and energy consumption requirements, finding lighter solutions to store neural networks such as parameter quantization and more efficient inference methods becomes major research topics. Parallel to that, adversarial machine learning has risen recently, unveiling some critical flaws of machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this paper, we investigate the adversarial robustness of quantized neural networks under different attacks. We show that quantization is not a robust protection when considering advanced threats and may result in severe form of gradient masking which leads to a false impression of security. However, and interestingly, we experimentally observe poor transferability capacities between full-precision and quantized models and between models with different quantization levels which we explain by the quantization value shift phenomenon and gradient misalignment.","PeriodicalId":117409,"journal":{"name":"2019 International Conference on Cyberworlds (CW)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Cyberworlds (CW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CW.2019.00057","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10
Abstract
As the demand to deploy neural network models on embedded systems grows, and given the associated memory footprint and energy consumption constraints, finding lighter ways to store neural networks, such as parameter quantization, together with more efficient inference methods, has become a major research topic. In parallel, adversarial machine learning has recently risen to prominence, unveiling critical flaws in machine learning models, especially neural networks. In particular, perturbed inputs called adversarial examples have been shown to fool a model into making incorrect predictions. In this paper, we investigate the adversarial robustness of quantized neural networks under different attacks. We show that quantization is not a robust protection against advanced threats and may result in a severe form of gradient masking, which leads to a false impression of security. However, and interestingly, we experimentally observe poor transferability between full-precision and quantized models, and between models with different quantization levels, which we explain by the quantization value shift phenomenon and gradient misalignment.
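To make the two ingredients of the abstract concrete, here is a minimal, illustrative sketch (not the paper's code): post-training uniform weight quantization of a toy PyTorch classifier, followed by a one-step FGSM adversarial perturbation of its inputs. The model architecture, 4-bit setting, and epsilon value are hypothetical choices for demonstration only.

```python
# Illustrative sketch only: uniform post-training weight quantization plus an
# FGSM attack on a toy classifier. Hyperparameters are assumptions, not the
# paper's experimental setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_weights(model, num_bits=4):
    """Uniformly quantize each weight tensor to 2**num_bits levels (post-training)."""
    levels = 2 ** num_bits - 1
    with torch.no_grad():
        for p in model.parameters():
            w_min, w_max = p.min(), p.max()
            scale = (w_max - w_min) / levels if w_max > w_min else torch.tensor(1.0)
            p.copy_(torch.round((p - w_min) / scale) * scale + w_min)

def fgsm(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: perturb x in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach().clamp(0, 1)

# Toy usage on random "images"; a real evaluation would use a trained model and dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
quantize_weights(model, num_bits=4)
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, eps=0.03)
print((model(x).argmax(1) != model(x_adv).argmax(1)).float().mean().item())
```

Transferability, as studied in the paper, would correspond to crafting `x_adv` against one model (e.g., full-precision) and measuring the misclassification rate it induces on another (e.g., a quantized copy).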