DSCNN: Hardware-oriented optimization for Stochastic Computing based Deep Convolutional Neural Networks

Zhe Li, Ao Ren, Ji Li, Qinru Qiu, Yanzhi Wang, Bo Yuan
{"title":"DSCNN: Hardware-oriented optimization for Stochastic Computing based Deep Convolutional Neural Networks","authors":"Zhe Li, Ao Ren, Ji Li, Qinru Qiu, Yanzhi Wang, Bo Yuan","doi":"10.1109/ICCD.2016.7753357","DOIUrl":null,"url":null,"abstract":"Deep Convolutional Neural Networks (DCNN), a branch of Deep Neural Networks which use the deep graph with multiple processing layers, enables the convolutional model to finely abstract the high-level features behind an image. Large-scale applications using DCNN mainly operate in high-performance server clusters, GPUs or FPGA clusters; it is restricted to extend the applications onto mobile/wearable devices and Internet-of-Things (IoT) entities due to high power/energy consumption. Stochastic Computing is a promising method to overcome this shortcoming used in specific hardware-based systems. Many complex arithmetic operations can be implemented with very simple hardware logic in the SC framework, which alleviates the extensive computation complexity. The exploration of network-wise optimization and the revision of network structure with respect to stochastic computing based hardware design have not been discussed in previous work. In this paper, we investigate Deep Stochastic Convolutional Neural Network (DSCNN) for DCNN using stochastic computing. The essential calculation components using SC are designed and evaluated. We propose a joint optimization method to collaborate components guaranteeing a high calculation accuracy in each stage of the network. The structure of original DSCNN is revised to accommodate SC hardware design's simplicity. Experimental Results show that as opposed to software inspired feature extraction block in DSCNN, an optimized hardware oriented feature extraction block achieves as higher as 59.27% calculation precision. And the optimized DSCNN can achieve only 3.48% network test error rate compared to 27.83% for baseline DSCNN using software inspired feature extraction block.","PeriodicalId":297899,"journal":{"name":"2016 IEEE 34th International Conference on Computer Design (ICCD)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"40","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE 34th International Conference on Computer Design (ICCD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCD.2016.7753357","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 40

Abstract

Deep Convolutional Neural Networks (DCNNs), a branch of deep neural networks that use deep graphs with multiple processing layers, enable convolutional models to finely abstract the high-level features behind an image. Large-scale applications of DCNNs mainly run on high-performance server clusters, GPUs, or FPGA clusters; extending these applications onto mobile/wearable devices and Internet-of-Things (IoT) entities is restricted by their high power/energy consumption. Stochastic Computing (SC) is a promising method to overcome this shortcoming in dedicated hardware-based systems: many complex arithmetic operations can be implemented with very simple hardware logic in the SC framework, which alleviates the extensive computational complexity. Network-wise optimization and revision of the network structure with respect to SC-based hardware design have not been discussed in previous work. In this paper, we investigate the Deep Stochastic Convolutional Neural Network (DSCNN), a DCNN implemented using stochastic computing. The essential SC-based calculation components are designed and evaluated. We propose a joint optimization method that coordinates the components to guarantee high calculation accuracy in each stage of the network, and we revise the structure of the original DSCNN to accommodate the simplicity of SC hardware design. Experimental results show that, compared to the software-inspired feature extraction block in DSCNN, the optimized hardware-oriented feature extraction block achieves as much as 59.27% higher calculation precision, and the optimized DSCNN achieves a network test error rate of only 3.48%, compared to 27.83% for the baseline DSCNN using the software-inspired feature extraction block.
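The abstract's central premise is that complex arithmetic reduces to very simple logic under SC. As a minimal illustration of this idea (not taken from the paper; the helper names such as `to_stream` and `sc_multiply` are hypothetical), the following Python sketch simulates unipolar stochastic computing, where a value in [0, 1] is encoded as the probability of 1s in a bit stream, multiplication becomes a bitwise AND of independent streams, and scaled addition becomes a 2-to-1 multiplexer:

```python
import random

def to_stream(p, length=1024):
    """Encode a value p in [0, 1] as a unipolar stochastic bit stream:
    each bit is 1 with probability p."""
    return [1 if random.random() < p else 0 for _ in range(length)]

def from_stream(stream):
    """Decode a unipolar stream back to a value: the fraction of 1s."""
    return sum(stream) / len(stream)

def sc_multiply(stream_a, stream_b):
    """Unipolar SC multiplication is a bitwise AND, since
    P(a AND b) = P(a) * P(b) for uncorrelated streams."""
    return [a & b for a, b in zip(stream_a, stream_b)]

def sc_scaled_add(stream_a, stream_b, select_stream):
    """Scaled SC addition with a 2-to-1 MUX: the output approximates
    (a + b) / 2 when the select stream encodes probability 0.5."""
    return [b if s else a for a, b, s in zip(stream_a, stream_b, select_stream)]

random.seed(0)
x, w = 0.8, 0.6
sx, sw = to_stream(x), to_stream(w)
print(from_stream(sc_multiply(sx, sw)))                     # ~0.48, i.e. x * w
print(from_stream(sc_scaled_add(sx, sw, to_stream(0.5))))   # ~0.70, i.e. (x + w) / 2
```

The decoded results only approximate the exact products and sums, and the error shrinks as the stream length grows at the cost of latency. This accuracy/stream-length trade-off at each component is exactly why the network-wise joint optimization described in the abstract matters for SC-based DCNN hardware.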