Quantization Strategy for Pareto-optimally Low-cost and Accurate CNN

K. Nakata, D. Miyashita, A. Maki, F. Tachibana, S. Sasaki, J. Deguchi, Ryuichi Fujimoto

2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), June 2021. DOI: 10.1109/AICAS51828.2021.9458452
{"title":"pareto最优低成本精确CNN量化策略","authors":"K. Nakata, D. Miyashita, A. Maki, F. Tachibana, S. Sasaki, J. Deguchi, Ryuichi Fujimoto","doi":"10.1109/AICAS51828.2021.9458452","DOIUrl":null,"url":null,"abstract":"Quantization is an effective technique to reduce memory and computational costs for inference of convolutional neural networks (CNNs). However, it has not been clarified which model can achieve higher recognition accuracy with lower memory and computational costs: a fat model (large number of parameters) quantized to an extremely low bit width (e.g., 1 or 2 bits) or a slim model (small number of parameters) quantized to moderately low bit width (e.g., 4 or 5 bits). To answer this question, we define a metric that combines the number of parameters and computations with bit widths of quantized weight parameters. Using this metric, we demonstrate that Pareto-optimal performance, where the best accuracy is obtained at a given memory or computational cost, is achieved when a slim model is moderately quantized rather than when a fat model is extremely quantized. Moreover, employing a strategy based on this finding, we empirically show that the Pareto frontier is improved by 4.3× under a post-training quantization scenario on the ImageNet dataset.","PeriodicalId":173204,"journal":{"name":"2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"63 6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Quantization Strategy for Pareto-optimally Low-cost and Accurate CNN\",\"authors\":\"K. Nakata, D. Miyashita, A. Maki, F. Tachibana, S. Sasaki, J. Deguchi, Ryuichi Fujimoto\",\"doi\":\"10.1109/AICAS51828.2021.9458452\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Quantization is an effective technique to reduce memory and computational costs for inference of convolutional neural networks (CNNs). However, it has not been clarified which model can achieve higher recognition accuracy with lower memory and computational costs: a fat model (large number of parameters) quantized to an extremely low bit width (e.g., 1 or 2 bits) or a slim model (small number of parameters) quantized to moderately low bit width (e.g., 4 or 5 bits). To answer this question, we define a metric that combines the number of parameters and computations with bit widths of quantized weight parameters. Using this metric, we demonstrate that Pareto-optimal performance, where the best accuracy is obtained at a given memory or computational cost, is achieved when a slim model is moderately quantized rather than when a fat model is extremely quantized. 
Moreover, employing a strategy based on this finding, we empirically show that the Pareto frontier is improved by 4.3× under a post-training quantization scenario on the ImageNet dataset.\",\"PeriodicalId\":173204,\"journal\":{\"name\":\"2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"volume\":\"63 6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AICAS51828.2021.9458452\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS51828.2021.9458452","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Quantization Strategy for Pareto-optimally Low-cost and Accurate CNN
Quantization is an effective technique for reducing the memory and computational costs of convolutional neural network (CNN) inference. However, it has not been clarified which model achieves higher recognition accuracy at lower memory and computational cost: a fat model (many parameters) quantized to an extremely low bit width (e.g., 1 or 2 bits) or a slim model (few parameters) quantized to a moderately low bit width (e.g., 4 or 5 bits). To answer this question, we define a metric that combines the number of parameters and computations with the bit widths of the quantized weight parameters. Using this metric, we demonstrate that Pareto-optimal performance, where the best accuracy is obtained at a given memory or computational cost, is achieved when a slim model is moderately quantized rather than when a fat model is extremely quantized. Moreover, employing a strategy based on this finding, we empirically show that the Pareto frontier is improved by 4.3× under a post-training quantization scenario on the ImageNet dataset.
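The abstract does not give the exact formula for the cost metric, but a natural reading is that memory cost scales as (number of weight parameters) × (weight bit width) and computational cost as (number of multiply-accumulate operations) × (weight bit width). The Python sketch below illustrates such a metric and the fat-vs-slim comparison under these assumed definitions; the layer shapes and bit widths are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a bit-width-weighted cost metric, assuming
#   memory cost  = (#weight parameters) x (weight bit width)
#   compute cost = (#MACs)              x (weight bit width)
# The paper's exact definition may differ; this only illustrates
# how the fat-vs-slim trade-off could be scored.

def conv_costs(c_in, c_out, k, h_out, w_out, bits):
    """Per-layer costs for a k x k convolution quantized to `bits` bits."""
    params = c_in * c_out * k * k      # number of weight parameters
    macs = params * h_out * w_out      # multiply-accumulates per image
    return params * bits, macs * bits  # (memory cost in bits, compute cost in bit-MACs)

# Fat layer: full width, quantized to 2 bits.
fat_mem, fat_comp = conv_costs(c_in=256, c_out=256, k=3, h_out=14, w_out=14, bits=2)

# Slim layer: half width, quantized to 4 bits.
slim_mem, slim_comp = conv_costs(c_in=128, c_out=128, k=3, h_out=14, w_out=14, bits=4)

print(f"fat : mem={fat_mem:,} bits, compute={fat_comp:,} bit-MACs")
print(f"slim: mem={slim_mem:,} bits, compute={slim_comp:,} bit-MACs")
# Halving the channel width quarters the parameter count, so even at
# double the bit width the slim layer costs half as much here; an
# equal-cost comparison would scale width by ~1/sqrt(2) instead of 1/2.
```

Under such a metric, comparing models at matched cost (rather than at matched architecture) is what lets the paper trace a Pareto frontier of accuracy versus memory or compute.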