Monte Carlo method based precision analysis of deep convolution nets

Robert Krutsch, S. Naidu
{"title":"Monte Carlo method based precision analysis of deep convolution nets","authors":"Robert Krutsch, S. Naidu","doi":"10.1109/DASIP.2016.7853814","DOIUrl":null,"url":null,"abstract":"Convolution Neural Networks today provide the best results for many image detection and image recognition problems. The network accuracy increase in the past years is obtained through an increase in complexity of the structure and amount of parameters of the deep networks. Memory bandwidth and power consumption constraints are limiting the deployment of such state-of-the-art architecture in low power embedded applications. Reduced coefficient bit depth is one of the most frequently used approach to bring the deep learning neural networks into low power embedded hardware accelerators. In this paper we propose a reduced precision, fixed point implementation that can reduce bandwidth and power consumption significantly. The results show that with an 8bit representation for more than 64% of the parameters less than 0.5% accuracy is lost. As expected, the error resilience varies from layer to layer and convolution kernel to convolution kernel. 
To cope with this variability and understand what parameter need what type of precision we have developed a Monte Carlo simulation tool that explores the decision space.","PeriodicalId":6494,"journal":{"name":"2016 Conference on Design and Architectures for Signal and Image Processing (DASIP)","volume":"42 1","pages":"162-167"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 Conference on Design and Architectures for Signal and Image Processing (DASIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DASIP.2016.7853814","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Convolutional Neural Networks today provide the best results for many image detection and image recognition problems. The accuracy gains of recent years have been obtained through increases in the structural complexity and parameter count of deep networks. Memory bandwidth and power consumption constraints limit the deployment of such state-of-the-art architectures in low-power embedded applications. Reducing the coefficient bit depth is one of the most frequently used approaches for bringing deep learning neural networks into low-power embedded hardware accelerators. In this paper we propose a reduced-precision, fixed-point implementation that can reduce bandwidth and power consumption significantly. The results show that with an 8-bit representation for more than 64% of the parameters, less than 0.5% accuracy is lost. As expected, error resilience varies from layer to layer and from convolution kernel to convolution kernel. To cope with this variability and to understand which parameters need which precision, we have developed a Monte Carlo simulation tool that explores the decision space.
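The idea described in the abstract, fixed-point quantization of network weights combined with a Monte Carlo exploration of per-layer bit depths, can be sketched as follows. This is a minimal illustration, not the authors' actual tool: the function names, the 4-16 bit search range, and the use of mean-squared quantization error plus a bandwidth proxy as the objectives are all assumptions made for the example.

```python
import numpy as np

def quantize_fixed_point(weights, bits, frac_bits):
    """Round float weights to a signed fixed-point grid with `bits` total
    bits, of which `frac_bits` are fractional, then convert back to float."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (bits - 1))        # most negative representable code
    qmax = 2 ** (bits - 1) - 1       # most positive representable code
    codes = np.clip(np.round(weights * scale), qmin, qmax)
    return codes / scale

def monte_carlo_precision_search(layers, trials, rng, min_bits=4, max_bits=16):
    """Randomly sample a bit depth per layer and record, for each trial,
    the resulting quantization error and a memory-bandwidth proxy."""
    results = []
    for _ in range(trials):
        # Draw an independent bit depth for every layer (the per-layer
        # variability the paper observes is what makes this worthwhile).
        bits = {name: int(rng.integers(min_bits, max_bits + 1))
                for name in layers}
        # Mean-squared quantization error summed over layers; two integer
        # bits are reserved for the sign and magnitude here (an assumption).
        err = sum(np.mean((w - quantize_fixed_point(w, bits[n], bits[n] - 2)) ** 2)
                  for n, w in layers.items())
        # Total stored bits as a stand-in for bandwidth/power cost.
        cost = sum(bits[n] * layers[n].size for n in layers)
        results.append((bits, err, cost))
    return results
```

Sampling many such configurations and keeping the Pareto-optimal (error, cost) points is one simple way to decide which layers tolerate 8-bit storage and which need more precision.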