Parallel Convolutional Neural Network (CNN) Accelerators Based on Stochastic Computing

Yawen Zhang, Xinyue Zhang, Jiahao Song, Yuan Wang, Ru Huang, Runsheng Wang
{"title":"Parallel Convolutional Neural Network (CNN) Accelerators Based on Stochastic Computing","authors":"Yawen Zhang, Xinyue Zhang, Jiahao Song, Yuan Wang, Ru Huang, Runsheng Wang","doi":"10.1109/SiPS47522.2019.9020615","DOIUrl":null,"url":null,"abstract":"Stochastic computing (SC), which processes the data in the form of random bit streams, has been used in neural networks due to simple logic gates performing complex arithmetic and the inherent high error-tolerance. However, SC-based neural network accelerators suffer from high latency, random fluctuations, and large hardware cost of pseudo-random number generators (PRNG), thus diminishing the advantages of stochastic computing. In this paper, we address these problems with a novel technique of generating bit streams in parallel, which needs only one clock for conversion and significantly reduces the hardware cost. Based on this parallel bitstream generator, we further present two kinds of convolutional neural network (CNN) accelerator architectures with digital and analog circuits, respectively, showing great potential for low-power applications.","PeriodicalId":256971,"journal":{"name":"2019 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Workshop on Signal Processing Systems (SiPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SiPS47522.2019.9020615","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Stochastic computing (SC), which processes data in the form of random bit streams, has been used in neural networks because simple logic gates can perform complex arithmetic and the representation is inherently error-tolerant. However, SC-based neural network accelerators suffer from high latency, random fluctuations, and the large hardware cost of pseudo-random number generators (PRNGs), which diminish the advantages of stochastic computing. In this paper, we address these problems with a novel technique for generating bit streams in parallel, which requires only one clock cycle for conversion and significantly reduces the hardware cost. Based on this parallel bit-stream generator, we further present two convolutional neural network (CNN) accelerator architectures, built with digital and analog circuits respectively, showing great potential for low-power applications.
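The abstract's claim that "simple logic gates perform complex arithmetic" refers to the standard unipolar SC encoding, where a value in [0, 1] is represented by the ones-density of a bit stream and a single AND gate multiplies two independent streams. The sketch below illustrates only this general principle and the latency/fluctuation trade-off the abstract mentions; it is not the paper's parallel bit-stream generator, and all function names (`to_bitstream`, `sc_multiply`, `decode`) are illustrative assumptions.

```python
import numpy as np

def to_bitstream(value, length, rng):
    """Encode a value in [0, 1] as a unipolar stochastic bit stream:
    each bit is 1 with probability equal to the value."""
    return (rng.random(length) < value).astype(np.uint8)

def sc_multiply(stream_a, stream_b):
    """Bitwise AND of two independent unipolar streams gives a stream
    whose ones-density approximates the product of the encoded values."""
    return stream_a & stream_b

def decode(stream):
    """Estimate the encoded value as the fraction of ones in the stream."""
    return stream.mean()

rng = np.random.default_rng(0)
a, b = 0.5, 0.8
length = 4096  # longer streams reduce random fluctuation, at the cost of latency
product = decode(sc_multiply(to_bitstream(a, length, rng),
                             to_bitstream(b, length, rng)))
print(f"SC estimate: {product:.3f}  (exact: {a * b:.3f})")
```

The need for long streams (and the PRNGs that produce them) is exactly the latency and hardware-cost problem the paper targets with its one-cycle parallel bit-stream generation.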