{"title":"A Scalable Low-power Accelerator for Sparse Recurrent Neural Networks","authors":"Jingdian Wang, Youyu Wu, Jinjiang Yang","doi":"10.1109/AUTEEE50969.2020.9315656","DOIUrl":null,"url":null,"abstract":"As the deep learning techniques develop, the Recurrent Neural Networks are widely used, especially in speech recognition and natural language processing applications. However, the data dependency and low data reuse make RNNs hard to process on low power platforms. In this paper, we designed a voltage-scalable Low power Accelerator for SparsE RNN named LASER. Firstly, the sparse-RNN is analyzed and the processing array is designed after the network compression approach. Due to the imbalanced workload of sparse-RNN, we introduced the voltage-scaling method to keep the architecture with low power and high throughput. As the methods applied to EESEN, the system power efficiency can be improved significantly.","PeriodicalId":6767,"journal":{"name":"2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE)","volume":"1 1","pages":"129-132"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AUTEEE50969.2020.9315656","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
As deep learning techniques have matured, Recurrent Neural Networks (RNNs) have come into wide use, especially in speech recognition and natural language processing applications. However, data dependencies and low data reuse make RNNs hard to process on low-power platforms. In this paper, we design LASER, a voltage-scalable Low-power Accelerator for SparsE RNNs. First, the sparse RNN is analyzed and the processing array is designed to match the network compression approach. Because the workload of a sparse RNN is imbalanced across processing elements, we introduce a voltage-scaling method that keeps the architecture at low power while sustaining high throughput. When these methods are applied to the EESEN model, the system's power efficiency improves significantly.
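To make the two ideas in the abstract concrete, the sketch below shows (in plain Python, not the paper's hardware design) the core kernel such an accelerator executes each time step: a sparse matrix-vector product over pruned weights stored in compressed (CSR) form. The per-row nonzero counts also expose the workload imbalance the abstract refers to, since processing-element rows with few nonzeros finish early. All function names, the pruning threshold, and the matrix sizes here are illustrative assumptions, not details from the paper.

```python
# Minimal sketch, assuming magnitude pruning and CSR weight storage.
# Not the paper's actual architecture; it only illustrates the kernel class.
import numpy as np


def prune_to_csr(w, threshold):
    """Magnitude-prune a dense weight matrix and pack the survivors in CSR form."""
    mask = np.abs(w) >= threshold
    values = w[mask]                     # nonzero weights, row-major order
    col_idx = np.nonzero(mask)[1]        # column index of each nonzero
    row_ptr = np.concatenate(([0], np.cumsum(mask.sum(axis=1))))
    return values, col_idx, row_ptr


def csr_matvec(values, col_idx, row_ptr, x):
    """Row-wise sparse matrix-vector product, as one PE row would compute it."""
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(y)):
        start, end = row_ptr[r], row_ptr[r + 1]
        y[r] = values[start:end] @ x[col_idx[start:end]]
    return y


rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))             # one gate's recurrent weight matrix
h = rng.normal(size=16)                  # previous hidden state
vals, cols, ptr = prune_to_csr(W, threshold=1.0)

# Per-row nonzero counts approximate per-PE work; the spread between rows is
# the imbalance that a voltage/frequency-scaling scheme can exploit.
work_per_row = np.diff(ptr)
print("nonzeros per row:", work_per_row)
print("y =", csr_matvec(vals, cols, ptr, h))
```

Running the sketch prints uneven per-row nonzero counts; under a one-row-per-PE mapping, that spread is exactly why a fixed-voltage array either wastes power on lightly loaded PEs or throttles throughput on heavily loaded ones.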