Power Reduction in CNN Pooling Layers with a Preliminary Partial Computation Strategy

Mehdi Ahmadi, S. Vakili, J. Langlois, W. Gross
2018 16th IEEE International New Circuits and Systems Conference (NEWCAS), June 2018
DOI: 10.1109/NEWCAS.2018.8585433
Citations: 13

Abstract

Convolutional neural networks (CNNs) are responsible for many recent successes in the computer vision field and are now the dominant approach for image classification. However, CNN-based methods perform many convolution operations and have high power consumption, which makes them difficult to deploy on mobile devices. In this paper, we propose a new method to reduce CNN power consumption by simplifying computations before max-pooling layers. The proposed method estimates the output of the max-pooling layer by approximating the preceding convolutional layer with a preliminary partial computation. Then, the method performs a complementary computation to generate an exact convolution output only for the selected feature. We also present an analysis of the approximation parameters. Simulation results show that the proposed method reduces power consumption by 21% and silicon area by 19% with negligible degradation in classification accuracy on the CIFAR-10 dataset.
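The core idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's hardware implementation; it assumes a generic setup where each pooling window covers several convolution outputs, the "preliminary partial computation" accumulates only a fraction of the multiply-accumulate terms, and the "complementary computation" finishes the sum for the selected winner only. All names, shapes, and the `partial_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def partial_max_pool(patches, weights, partial_ratio=0.5):
    """Sketch of max-pooling with a preliminary partial computation.

    patches: (pool_size, n_terms) array; each row is the flattened input
             patch for one convolution output inside a pooling window.
    weights: (n_terms,) flattened kernel weights.

    Returns the index of the selected feature and its exact convolution
    output. Only the winner receives the full multiply-accumulate cost;
    the other pool_size - 1 outputs are never fully computed.
    """
    n_terms = weights.shape[0]
    k = max(1, int(partial_ratio * n_terms))  # terms used in the estimate

    # Preliminary partial computation: accumulate only the first k terms
    # of each dot product to get cheap approximations of the outputs.
    partial = patches[:, :k] @ weights[:k]

    # Select the likely maximum from the approximate outputs.
    winner = int(np.argmax(partial))

    # Complementary computation: finish the remaining n_terms - k terms
    # for the winner only, yielding its exact convolution output.
    exact = float(partial[winner] + patches[winner, k:] @ weights[k:])
    return winner, exact
```

The power saving comes from skipping roughly `(1 - partial_ratio)` of the multiply-accumulates for every non-selected output in each pooling window; the accuracy cost is that the approximate argmax can occasionally pick a different feature than the exact one, which the paper's parameter analysis would quantify.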