{"title":"Quantized Convolutional Neural Networks Robustness under Perturbation.","authors":"Jack Langille, Issam Hammad, Guy Kember","doi":"10.12688/f1000research.163144.1","DOIUrl":null,"url":null,"abstract":"<p><p>Contemporary machine learning models are increasingly becoming restricted by size and subsequent operations per forward pass, demanding increasing compute requirements. Quantization has emerged as a convenient approach to addressing this, in which weights and activations are mapped from their conventionally used floating-point 32-bit numeric representations to lower precision integers. This process introduces significant reductions in inference time and simplifies the hardware requirements. It is a well-studied result that the performance of such reduced precision models is congruent with their floating-point counterparts. However, there is a lack of literature that addresses the performance of quantized models in a perturbed input space, as is common when stress testing regular full-precision models, particularly for real-world deployments. We focus on addressing this gap in the context of 8-bit quantized convolutional neural networks (CNNs). We study three state-of-the-art CNNs: ResNet-18, VGG-16, and SqueezeNet1_1, and subject their floating point and fixed point forms to various noise regimes with varying intensities. We characterize performance in terms of traditional metrics, including top-1 and top-5 accuracy, as well as the F1 score. We also introduce a new metric, the Kullback-Liebler divergence of the two output distributions for a given floating-point/fixed-point model pair, as a means to examine how the model's output distribution has changed as a result of quantization, which, we contend, can be interpreted as a proxy for model similarity in decision making. We find that across all three models and under each perturbation scheme, the relative error between the quantized and full-precision model was consistently low. We also find that Kullback-Liebler divergence was on the same order of magnitude as the unperturbed tests across all perturbation regimes except Brownian noise, where significant divergences were observed for VGG-16 and SqueezeNet1_1.</p>","PeriodicalId":12260,"journal":{"name":"F1000Research","volume":"14 ","pages":"419"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041843/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"F1000Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12688/f1000research.163144.1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"Pharmacology, Toxicology and Pharmaceutics","Score":null,"Total":0}
Citations: 0
Abstract
Contemporary machine learning models are increasingly constrained by their size and the number of operations required per forward pass, driving ever-higher compute requirements. Quantization has emerged as a convenient way to address this: weights and activations are mapped from their conventional 32-bit floating-point representations to lower-precision integers. This significantly reduces inference time and simplifies hardware requirements. It is a well-studied result that the performance of such reduced-precision models is comparable to that of their floating-point counterparts. However, there is a lack of literature addressing the performance of quantized models in a perturbed input space, as is common when stress testing full-precision models, particularly for real-world deployments. We focus on addressing this gap in the context of 8-bit quantized convolutional neural networks (CNNs). We study three state-of-the-art CNNs, ResNet-18, VGG-16, and SqueezeNet1_1, and subject their floating-point and fixed-point forms to various noise regimes of varying intensity. We characterize performance in terms of traditional metrics, including top-1 and top-5 accuracy as well as the F1 score. We also introduce a new metric, the Kullback-Leibler divergence between the output distributions of a given floating-point/fixed-point model pair, as a means of examining how a model's output distribution changes as a result of quantization; we contend this can be interpreted as a proxy for similarity in decision making. We find that across all three models and under each perturbation scheme, the relative error between the quantized and full-precision models was consistently low. We also find that the Kullback-Leibler divergence was on the same order of magnitude as in the unperturbed tests across all perturbation regimes except Brownian noise, where significant divergences were observed for VGG-16 and SqueezeNet1_1.
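The two technical ingredients of the abstract, the float-to-int8 mapping and the Kullback-Leibler comparison of output distributions, can be illustrated with a short sketch. The Python snippet below is an assumption-laden illustration and not the authors' implementation: it applies a generic affine int8 quantization to a weight tensor and computes the KL divergence between the softmax outputs of a floating-point model and a (here simulated) quantized counterpart. The function names (quantize_int8, dequantize, kl_divergence) and the random logits are hypothetical; the abstract does not specify a framework or calibration scheme.

```python
# Minimal sketch (not the paper's code): affine int8 quantization of a tensor,
# and KL divergence between FP32 and int8 softmax output distributions.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float tensor to int8 via an affine (scale, zero-point) mapping."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0
    if scale == 0.0:          # degenerate constant tensor
        scale = 1.0
    zero_point = np.round(-128.0 - x_min / scale)
    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    """Recover an approximate float tensor from its int8 representation."""
    return scale * (q.astype(np.float32) - zero_point)

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) between per-sample output distributions, averaged over the batch."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

# Quantize a hypothetical convolutional weight tensor and check the error.
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 3, 3, 3)).astype(np.float32)
q, scale, zp = quantize_int8(weights)
print("max abs quantization error:", np.abs(weights - dequantize(q, scale, zp)).max())

# Compare output distributions: the int8 model's logits are simulated here as a
# slightly perturbed copy of the FP32 logits (an assumption for illustration).
fp32_logits = rng.normal(size=(8, 1000))      # e.g. 8 images, 1000 ImageNet classes
int8_logits = fp32_logits + rng.normal(scale=0.05, size=fp32_logits.shape)
print("KL(fp32 || int8):", kl_divergence(softmax(fp32_logits), softmax(int8_logits)))
```

In the paper's setting, the divergence would instead be computed from the actual softmax outputs of the FP32 and int8 models over the test set under each noise regime, rather than from simulated logits.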
F1000Research: Pharmacology, Toxicology and Pharmaceutics (all)
CiteScore
5.00
Self-citation rate
0.00%
Articles published
1646
Review time
1 week
Journal description:
F1000Research publishes articles and other research outputs reporting basic scientific, scholarly, translational and clinical research across the physical and life sciences, engineering, medicine, social sciences and humanities. F1000Research is a scholarly publication platform set up for the scientific, scholarly and medical research community; each article has at least one author who is a qualified researcher, scholar or clinician actively working in their speciality and who has made a key contribution to the article. Articles must be original (not duplications). All research is suitable irrespective of the perceived level of interest or novelty; we welcome confirmatory and negative results, as well as null studies. F1000Research publishes different types of research, including clinical trials, systematic reviews, software tools, method articles, and many others. Reviews and Opinion articles providing a balanced and comprehensive overview of the latest discoveries in a particular field, or presenting a personal perspective on recent developments, are also welcome. See the full list of article types we accept for more information.