A 28nm 276.55TFLOPS/W Sparse Deep-Neural-Network Training Processor with Implicit Redundancy Speculation and Batch Normalization Reformulation

Yang Wang, Yubin Qin, Dazheng Deng, Ji-de Wei, Tianbao Chen, Xinhan Lin, Leibo Liu, Shaojun Wei, S. Yin
{"title":"具有隐式冗余推测和批归一化重构的28nm 276.55TFLOPS/W稀疏深度神经网络训练处理器","authors":"Yang Wang, Yubin Qin, Dazheng Deng, Ji-de Wei, Tianbao Chen, Xinhan Lin, Leibo Liu, Shaojun Wei, S. Yin","doi":"10.23919/VLSICircuits52068.2021.9492420","DOIUrl":null,"url":null,"abstract":"A dynamic weight pruning (DWP) explored processor, named Trainer, is proposed for energy-efficient deep-neural-network (DNN) training on edge-device. It has three key features: 1) A implicit redundancy speculation unit (IRSU) improves 1.46× throughput. 2) A dataflow, allowing a reuse-adaptive dynamic compression and PE regrouping, increases 1.52× utilization. 3) A data-retrieval eliminated batch-normalization (BN) unit (REBU) saves 37.1% of energy. Trainer achieves a peak energy efficiency of 276.55TFLOPS/W. It reduces 2.23× training energy and offers a 1.76× training speedup compared with the state-of-the-art sparse DNN training processor.","PeriodicalId":106356,"journal":{"name":"2021 Symposium on VLSI Circuits","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"A 28nm 276.55TFLOPS/W Sparse Deep-Neural-Network Training Processor with Implicit Redundancy Speculation and Batch Normalization Reformulation\",\"authors\":\"Yang Wang, Yubin Qin, Dazheng Deng, Ji-de Wei, Tianbao Chen, Xinhan Lin, Leibo Liu, Shaojun Wei, S. Yin\",\"doi\":\"10.23919/VLSICircuits52068.2021.9492420\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A dynamic weight pruning (DWP) explored processor, named Trainer, is proposed for energy-efficient deep-neural-network (DNN) training on edge-device. It has three key features: 1) A implicit redundancy speculation unit (IRSU) improves 1.46× throughput. 2) A dataflow, allowing a reuse-adaptive dynamic compression and PE regrouping, increases 1.52× utilization. 3) A data-retrieval eliminated batch-normalization (BN) unit (REBU) saves 37.1% of energy. Trainer achieves a peak energy efficiency of 276.55TFLOPS/W. It reduces 2.23× training energy and offers a 1.76× training speedup compared with the state-of-the-art sparse DNN training processor.\",\"PeriodicalId\":106356,\"journal\":{\"name\":\"2021 Symposium on VLSI Circuits\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 Symposium on VLSI Circuits\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/VLSICircuits52068.2021.9492420\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Symposium on VLSI Circuits","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/VLSICircuits52068.2021.9492420","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12

Abstract

A processor named Trainer, which exploits dynamic weight pruning (DWP), is proposed for energy-efficient deep-neural-network (DNN) training on edge devices. It has three key features: 1) an implicit redundancy speculation unit (IRSU) improves throughput by 1.46×; 2) a dataflow allowing reuse-adaptive dynamic compression and PE regrouping increases utilization by 1.52×; 3) a data-retrieval-eliminated batch-normalization (BN) unit (REBU) saves 37.1% of energy. Trainer achieves a peak energy efficiency of 276.55TFLOPS/W. Compared with the state-of-the-art sparse DNN training processor, it reduces training energy by 2.23× and offers a 1.76× training speedup.
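
The abstract does not describe the DWP algorithm itself. As a rough, hypothetical illustration of what dynamic weight pruning generally means (the function name dwp_update, the learning rate, and the 50% sparsity target below are our own assumptions, not details from the paper), the sketch performs a dense gradient step and then re-derives a magnitude-based mask, so the sparsity pattern can change as training proceeds:

import numpy as np

def dwp_update(weights, grads, lr=0.01, sparsity=0.5):
    """Hypothetical dynamic-weight-pruning step: dense gradient update,
    then a fresh magnitude mask, so previously pruned weights may return."""
    w = weights - lr * grads                      # dense SGD step
    k = int(sparsity * w.size)                    # number of weights to prune
    idx = np.argsort(np.abs(w), axis=None)[:k]    # flat indices of k smallest magnitudes
    mask = np.ones(w.size, dtype=w.dtype)
    mask[idx] = 0
    mask = mask.reshape(w.shape)
    return w * mask, mask

# Toy usage: roughly half the weights remain non-zero after each step.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
g = rng.standard_normal((64, 64)).astype(np.float32)
w, mask = dwp_update(w, g)
print(f"kept {mask.mean():.2%} of weights")

In a real training loop such a mask is typically recomputed every step or every few steps; how Trainer's hardware actually derives and exploits the resulting sparsity is not specified in the abstract.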
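Likewise, the abstract only states that REBU eliminates data retrieval for batch normalization. One common way to achieve something similar, shown purely as an assumption-laden sketch (streaming_bn_stats and bn_forward are illustrative names, not the paper's interfaces), is to accumulate per-channel sums and sums of squares while activations stream out of the preceding layer, so the BN statistics come from a single pass rather than a second read of stored activations:

import numpy as np

def streaming_bn_stats(tiles, num_channels):
    """Accumulate per-channel sum and sum of squares as activation tiles
    stream by, so BN statistics need no second read of stored activations."""
    s = np.zeros(num_channels)
    sq = np.zeros(num_channels)
    n = 0
    for tile in tiles:                    # each tile: (samples, num_channels)
        s += tile.sum(axis=0)
        sq += np.square(tile).sum(axis=0)
        n += tile.shape[0]
    mean = s / n
    var = sq / n - mean ** 2              # Var[x] = E[x^2] - E[x]^2
    return mean, var

def bn_forward(x, mean, var, gamma, beta, eps=1e-5):
    """Standard BN normalization using the precomputed statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Toy usage on 256 samples x 32 channels, streamed as 8 tiles.
x = np.random.default_rng(1).standard_normal((256, 32))
mean, var = streaming_bn_stats(np.array_split(x, 8), num_channels=32)
y = bn_forward(x, mean, var, gamma=np.ones(32), beta=np.zeros(32))

This removes the extra statistics pass over stored activations; whether it matches the paper's actual BN reformulation cannot be determined from the abstract alone.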