A Scalable Low-power Accelerator for Sparse Recurrent Neural Networks

Jingdian Wang, Youyu Wu, Jinjiang Yang
{"title":"A Scalable Low-power Accelerator for Sparse Recurrent Neural Networks","authors":"Jingdian Wang, Youyu Wu, Jinjiang Yang","doi":"10.1109/AUTEEE50969.2020.9315656","DOIUrl":null,"url":null,"abstract":"As the deep learning techniques develop, the Recurrent Neural Networks are widely used, especially in speech recognition and natural language processing applications. However, the data dependency and low data reuse make RNNs hard to process on low power platforms. In this paper, we designed a voltage-scalable Low power Accelerator for SparsE RNN named LASER. Firstly, the sparse-RNN is analyzed and the processing array is designed after the network compression approach. Due to the imbalanced workload of sparse-RNN, we introduced the voltage-scaling method to keep the architecture with low power and high throughput. As the methods applied to EESEN, the system power efficiency can be improved significantly.","PeriodicalId":6767,"journal":{"name":"2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE)","volume":"1 1","pages":"129-132"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AUTEEE50969.2020.9315656","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As deep learning techniques develop, recurrent neural networks (RNNs) are widely used, especially in speech recognition and natural language processing applications. However, data dependencies and low data reuse make RNNs hard to process on low-power platforms. In this paper, we design LASER, a voltage-scalable Low-power Accelerator for SparsE RNNs. First, we analyze the sparse RNN and design the processing array around the network compression approach. Because the workload of a sparse RNN is imbalanced, we introduce a voltage-scaling method that keeps the architecture at low power while sustaining high throughput. When these methods are applied to EESEN, system power efficiency improves significantly.
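The abstract describes compressing (pruning) the RNN and mapping the resulting sparse matrix-vector products onto a processing array whose imbalanced per-row workload motivates voltage scaling. The paper gives no implementation details, so the sketch below is only illustrative: it assumes a CSR sparse format and simple magnitude pruning (both assumptions, not details from the paper) to show the kernel such an accelerator executes and where the load imbalance comes from. The function name `csr_spmv` is hypothetical.

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = W @ x for a pruned weight matrix W stored in CSR form.

    Each row is an independent dot product, so rows map naturally onto
    the processing elements (PEs) of an accelerator array.
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for r in range(n_rows):
        start, end = row_ptr[r], row_ptr[r + 1]
        y[r] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# Prune a dense recurrent weight matrix to ~90% sparsity (assumed
# magnitude pruning) and convert it to CSR.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W[np.abs(W) < np.quantile(np.abs(W), 0.9)] = 0.0

values, col_idx, row_ptr = [], [], [0]
for row in W:
    nz = np.nonzero(row)[0]
    col_idx.extend(nz)
    values.extend(row[nz])
    row_ptr.append(len(values))
values, col_idx, row_ptr = map(np.array, (values, col_idx, row_ptr))

x = rng.standard_normal(64)
assert np.allclose(csr_spmv(values, col_idx, row_ptr, x), W @ x)

# Nonzeros per row = work per PE; the spread across rows is the
# workload imbalance the abstract refers to.
nnz_per_row = np.diff(row_ptr)
print("nonzeros per row: min", nnz_per_row.min(), "max", nnz_per_row.max())
```

Since dynamic power scales roughly with C·V²·f, a PE assigned rows with fewer nonzeros can finish its share at a lower voltage and frequency, which is presumably the lever the abstract's voltage-scaling method exploits to keep throughput high at low power.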