{"title":"LSFQ:一种用于高性能fpga CNN加速的低精度全整数量化方法","authors":"Zhenshan Bao, Kang Zhan, Wenbo Zhang, Junnan Guo","doi":"10.1109/COOLCHIPS52128.2021.9410327","DOIUrl":null,"url":null,"abstract":"Neural network quantization has become an important research area. Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, and can maintain high accuracy. However, few quantization can demonstrate this advantage on hardware platform, because the design of quantization algorithm lacks the consideration of actual hardware implementation. In this paper, we propose an efficient quantization method for hardware implementation, a learnable parameter soft clipping fully integer quantization (LSFQ), which includes weight quantization and activation quantization with learnable clipping parameter method. The quantization parameters are optimized automatically by back propagation to minimize the loss, then the BatchNorm layer and convolutional layer are fused, and the bias and quantization step size are further quantized. In this way, LSFQ accomplishes integer-only-arithmetic. We evaluate the quantization algorithm on a variety of models including VGG7, mobile-net v2 in CIFAR10 and CIFAR100. The results show that when the quantization reaches 3-bit or 4-bit, the accuracy loss of our method is less than 1 % compared with the full-precision network. In addition, we design an accelerator for the quantization algorithm and deploy it to the FPGA platform to verify the hardware-awareness of our method.","PeriodicalId":103337,"journal":{"name":"2021 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"LSFQ: A Low Precision Full Integer Quantization for High-Performance FPGA-Based CNN Acceleration\",\"authors\":\"Zhenshan Bao, Kang Zhan, Wenbo Zhang, Junnan Guo\",\"doi\":\"10.1109/COOLCHIPS52128.2021.9410327\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural network quantization has become an important research area. Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, and can maintain high accuracy. However, few quantization can demonstrate this advantage on hardware platform, because the design of quantization algorithm lacks the consideration of actual hardware implementation. In this paper, we propose an efficient quantization method for hardware implementation, a learnable parameter soft clipping fully integer quantization (LSFQ), which includes weight quantization and activation quantization with learnable clipping parameter method. The quantization parameters are optimized automatically by back propagation to minimize the loss, then the BatchNorm layer and convolutional layer are fused, and the bias and quantization step size are further quantized. In this way, LSFQ accomplishes integer-only-arithmetic. We evaluate the quantization algorithm on a variety of models including VGG7, mobile-net v2 in CIFAR10 and CIFAR100. The results show that when the quantization reaches 3-bit or 4-bit, the accuracy loss of our method is less than 1 % compared with the full-precision network. 
In addition, we design an accelerator for the quantization algorithm and deploy it to the FPGA platform to verify the hardware-awareness of our method.\",\"PeriodicalId\":103337,\"journal\":{\"name\":\"2021 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS)\",\"volume\":\"59 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/COOLCHIPS52128.2021.9410327\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COOLCHIPS52128.2021.9410327","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LSFQ: A Low Precision Full Integer Quantization for High-Performance FPGA-Based CNN Acceleration
Neural network quantization has become an important research area. Deep networks that run with low-precision operations at inference time offer power and space advantages over high-precision alternatives while maintaining high accuracy. However, few quantization methods can demonstrate this advantage on hardware platforms, because quantization algorithms are often designed without considering the actual hardware implementation. In this paper, we propose an efficient, hardware-friendly quantization method, learnable-parameter soft clipping fully integer quantization (LSFQ), which covers both weight quantization and activation quantization with learnable clipping parameters. The quantization parameters are optimized automatically by back-propagation to minimize the loss; the BatchNorm layers are then fused into the convolutional layers, and the biases and quantization step sizes are further quantized, so that LSFQ achieves integer-only arithmetic. We evaluate the quantization algorithm on a variety of models, including VGG7 and MobileNet-V2, on CIFAR-10 and CIFAR-100. The results show that at 3-bit or 4-bit quantization, the accuracy loss of our method is less than 1% compared with the full-precision network. In addition, we design an accelerator for the quantization algorithm and deploy it on an FPGA platform to verify the hardware awareness of our method.
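The two ingredients the abstract describes, quantization with a clipping parameter learned by back-propagation and BatchNorm fusion into the preceding convolution, can be illustrated with a short PyTorch sketch. This is not the authors' LSFQ implementation: it uses a PACT/LSQ-style hard clip as a stand-in for the paper's soft-clipping function, and the names LearnableClipQuant and fuse_conv_bn are invented for the example.

```python
# Minimal sketch, assuming a PACT/LSQ-style learnable clip rather than the
# paper's exact soft-clipping function. Not the authors' LSFQ code.
import torch
import torch.nn as nn


class LearnableClipQuant(nn.Module):
    """Fake-quantize a tensor to `bits` bits inside a learnable range [-alpha, alpha]."""

    def __init__(self, bits: int = 4, init_alpha: float = 3.0):
        super().__init__()
        self.bits = bits
        self.alpha = nn.Parameter(torch.tensor(init_alpha))  # clipping bound, learned by backprop

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        qmax = 2 ** (self.bits - 1) - 1          # e.g. 7 for 4-bit signed values
        step = self.alpha / qmax                 # quantization step size
        # Clip with torch.minimum/maximum so gradients also reach alpha.
        x_clip = torch.minimum(torch.maximum(x, -self.alpha), self.alpha)
        x_quant = torch.round(x_clip / step) * step
        # Straight-through estimator: forward uses x_quant,
        # backward treats rounding as identity.
        return x_clip + (x_quant - x_clip).detach()


def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding convolution's weight and bias.
    Assumes groups == 1 and dilation == 1 for brevity."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused
```

Note that this sketch only simulates quantization in floating point; the paper goes further and quantizes the fused bias and the step size itself so that inference runs with integer arithmetic only.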