{"title":"具有条件计算和低外部存储器访问的深度卷积神经网络加速器","authors":"Minkyu Kim, Jae-sun Seo","doi":"10.1109/CICC48029.2020.9075931","DOIUrl":null,"url":null,"abstract":"This paper presents an ASIC accelerator for deep convolutional neural networks (DCNNs) featuring a novel conditional computing scheme that synergistically combines precision-cascading with zero-skipping. To reduce many redundant convolution operations that are followed by max-pooling operations, we propose precision-cascading, where the input features are divided into a number of low-precision groups and approximate convolutions with only the most significant bits (MSBs) are performed first. Based on this approximate computation, the full-precision convolution is performed only on the maximum pooling output that is found. This way, the total number of bit-wise convolutions can be reduced by ~2× without affecting the output feature values and with <0.8% degradation in final ImageNet classification accuracy. Precision-cascading provides the added benefit of increased sparsity per low-precision group, which we exploit with zero-skipping to eliminate clock cycles as well as external memory access that involve zero inputs. By jointly optimizing the conditional computing scheme and hardware architecture, the 40nm prototype chip demonstrates a peak energy-efficiency of 8.85 TOPS/W at 0.9V supply and low external memory access of 55.31 MB (or 0.0018 access/MAC) for ImageNet classification with VGG-16 CNN.","PeriodicalId":409525,"journal":{"name":"2020 IEEE Custom Integrated Circuits Conference (CICC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access\",\"authors\":\"Minkyu Kim, Jae-sun Seo\",\"doi\":\"10.1109/CICC48029.2020.9075931\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents an ASIC accelerator for deep convolutional neural networks (DCNNs) featuring a novel conditional computing scheme that synergistically combines precision-cascading with zero-skipping. To reduce many redundant convolution operations that are followed by max-pooling operations, we propose precision-cascading, where the input features are divided into a number of low-precision groups and approximate convolutions with only the most significant bits (MSBs) are performed first. Based on this approximate computation, the full-precision convolution is performed only on the maximum pooling output that is found. This way, the total number of bit-wise convolutions can be reduced by ~2× without affecting the output feature values and with <0.8% degradation in final ImageNet classification accuracy. Precision-cascading provides the added benefit of increased sparsity per low-precision group, which we exploit with zero-skipping to eliminate clock cycles as well as external memory access that involve zero inputs. 
By jointly optimizing the conditional computing scheme and hardware architecture, the 40nm prototype chip demonstrates a peak energy-efficiency of 8.85 TOPS/W at 0.9V supply and low external memory access of 55.31 MB (or 0.0018 access/MAC) for ImageNet classification with VGG-16 CNN.\",\"PeriodicalId\":409525,\"journal\":{\"name\":\"2020 IEEE Custom Integrated Circuits Conference (CICC)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Custom Integrated Circuits Conference (CICC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CICC48029.2020.9075931\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Custom Integrated Circuits Conference (CICC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CICC48029.2020.9075931","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Convolutional Neural Network Accelerator Featuring Conditional Computing and Low External Memory Access
Abstract: This paper presents an ASIC accelerator for deep convolutional neural networks (DCNNs) featuring a novel conditional computing scheme that synergistically combines precision-cascading with zero-skipping. To reduce the many redundant convolution operations whose results are discarded by subsequent max-pooling, we propose precision-cascading: the input features are divided into a number of low-precision bit groups, and approximate convolutions using only the most significant bits (MSBs) are performed first. Based on this approximate computation, the full-precision convolution is then completed only for the window identified as the max-pooling winner. This way, the total number of bit-wise convolutions is reduced by ~2× without affecting the output feature values and with <0.8% degradation in final ImageNet classification accuracy. Precision-cascading provides the added benefit of increased sparsity within each low-precision group, which we exploit with zero-skipping to eliminate clock cycles as well as external memory accesses that involve zero inputs. By jointly optimizing the conditional computing scheme and the hardware architecture, the 40 nm prototype chip demonstrates a peak energy efficiency of 8.85 TOPS/W at a 0.9 V supply and low external memory access of 55.31 MB (0.0018 accesses/MAC) for ImageNet classification with the VGG-16 CNN.
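To make the two-stage scheme concrete, below is a minimal NumPy sketch of precision-cascading combined with zero-skipping over one max-pooling group. The function names, the 4-bit MSB/LSB split, and the 2×2 pooling group are illustrative assumptions for exposition only; this is not the paper's hardware implementation.

```python
import numpy as np

def conv_window(x, w):
    """Dot product of one convolution window with the kernel.
    Zero-valued inputs are skipped, mimicking zero-skipping in hardware,
    where a zero activation costs no MAC cycle; returns (sum, MACs spent)."""
    acc, macs = 0, 0
    for xi, wi in zip(x.ravel(), w.ravel()):
        if xi != 0:  # zero-skipping: no cycle spent on a zero input
            acc += int(xi) * int(wi)
            macs += 1
    return acc, macs

def cascaded_maxpool_conv(windows, w, msb_bits=4, total_bits=8):
    """Two-stage precision-cascading over one max-pooling group (a sketch,
    assuming unsigned activations split into MSB and LSB bit groups).

    Stage 1 convolves only the MSBs of every window; stage 2 adds the LSB
    convolution only for the window that won the approximate max-pooling
    comparison. Convolution is linear in the input, so the partial sums
    recombine exactly: conv(x) = (conv(x >> k) << k) + conv(x & mask)."""
    lsb_bits = total_bits - msb_bits
    mask = (1 << lsb_bits) - 1

    # Stage 1: approximate (MSB-only) convolution per window. The MSB
    # groups tend to be sparser, so zero-skipping removes more cycles here.
    approx, total_macs = [], 0
    for x in windows:
        s, m = conv_window(x >> lsb_bits, w)
        approx.append(s << lsb_bits)
        total_macs += m

    # The approximate winner may occasionally differ from the true
    # full-precision winner; that mismatch is the source of the small
    # accuracy loss the abstract reports.
    winner = int(np.argmax(approx))

    # Stage 2: refine only the winning window with its LSB convolution.
    s, m = conv_window(windows[winner] & mask, w)
    total_macs += m
    return approx[winner] + s, total_macs

# Example: a 2x2 pooling group of 3x3 windows with 8-bit activations.
rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=(3, 3))
windows = [rng.integers(0, 256, size=(3, 3)) for _ in range(4)]
value, macs = cascaded_maxpool_conv(windows, w)
print(value, macs)  # exact output of the pooling winner, MACs actually spent
```

In this sketch the output value for the selected window is bit-exact, since the MSB and LSB partial sums recombine by linearity; the only approximation is in which window wins the pooling comparison, consistent with the <0.8% accuracy degradation stated above. It also illustrates the claimed synergy: the low-precision MSB groups contain more zeros, so zero-skipping eliminates proportionally more work in the cheap first stage.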