LPE: Logarithm Posit Processing Element for Energy-Efficient Edge-Device Training

Yang Wang, Dazheng Deng, Leibo Liu, Shaojun Wei, S. Yin
{"title":"LPE: Logarithm Posit Processing Element for Energy-Efficient Edge-Device Training","authors":"Yang Wang, Dazheng Deng, Leibo Liu, Shaojun Wei, S. Yin","doi":"10.1109/AICAS51828.2021.9458421","DOIUrl":null,"url":null,"abstract":"Recently, edge-device training has arisen an urgent necessity since it can enhance the model adaptability without causing high transmission cost and privacy issues. Due to the need for a wide data range and high data precision to improve accuracy, DNN training requires much wider floating-point (FP) data for convolution and complicated arithmetics for batch normalization. They lead to massive computation and memory access energy, which yields challenges for power-constrained edge-devices. This paper proposes a novel PE, called LPE, with three innovations to solve this issue. First, LPE stores the operands in the posit format, satisfying both precision and data range with lower bit-width. It reduces training latency and energy for memory access. Second, LPE transfers complicated arithmetics during training into the logarithm domain, including multiplication in convolution layer and division, square, square root in batch normalization layers. It reduces computation energy and improves throughput. Third, LPE contains a two-stage floating-point accumulation unit. It extends the computation range while using the low bit-width accumulator, enhancing precision and reducing power consumption. Evaluated with 28 nm CMOS process, our PE achieves 1.81× power and 1.35× area reduction compared with IEEE 754 float-point 16 (FP16) fused MAC while maintaining the same dynamic range. When performing training with the proposed PE unit, it can achieve 1.97× energy reduction and offer 1.68× speed up.","PeriodicalId":173204,"journal":{"name":"2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS51828.2021.9458421","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recently, edge-device training has become an urgent necessity, since it can enhance model adaptability without incurring high transmission costs or privacy issues. Because a wide data range and high data precision are needed to improve accuracy, DNN training requires much wider floating-point (FP) data for convolution and complicated arithmetic for batch normalization. These lead to massive computation and memory-access energy, which poses challenges for power-constrained edge devices. This paper proposes a novel PE, called LPE, with three innovations to solve this issue. First, LPE stores operands in the posit format, satisfying both precision and data-range requirements at a lower bit-width; this reduces training latency and memory-access energy. Second, LPE transforms the complicated arithmetic of training into the logarithm domain, including multiplication in convolution layers and division, square, and square root in batch-normalization layers; this reduces computation energy and improves throughput. Third, LPE contains a two-stage floating-point accumulation unit, which extends the computation range while using a low bit-width accumulator, enhancing precision and reducing power consumption. Evaluated in a 28 nm CMOS process, our PE achieves a 1.81× power reduction and a 1.35× area reduction compared with an IEEE 754 floating-point 16 (FP16) fused MAC while maintaining the same dynamic range. When performing training with the proposed PE, it achieves a 1.97× energy reduction and a 1.68× speedup.
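On the first innovation: a posit packs a sign bit, a run-length-encoded regime, up to es exponent bits, and a fraction into one fixed-width word, which is how it covers a wide dynamic range and high precision at a lower bit-width than FP16. Below is a minimal Python sketch of a generic posit(n, es) decoder; the widths n = 16 and es = 1 are illustrative assumptions, since the abstract does not state LPE's exact field layout.

```python
def decode_posit(bits: int, n: int = 16, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float.
    Illustrative model only; n and es are assumed, not taken from the paper."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")              # NaR ("not a real")
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0:
        bits = (-bits) & mask            # negative posits: two's complement
    # Regime: run of identical bits starting just below the sign bit.
    first = (bits >> (n - 2)) & 1
    i, run = n - 2, 0
    while i >= 0 and ((bits >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first else -run
    i -= 1                               # skip the regime-terminating bit
    # Exponent: up to es bits (bits truncated off the word end are zero).
    exp, e_bits = 0, 0
    while e_bits < es and i >= 0:
        exp = (exp << 1) | ((bits >> i) & 1)
        e_bits += 1
        i -= 1
    exp <<= es - e_bits
    # Fraction: whatever bits remain, with an implicit leading 1.
    frac_bits = i + 1
    frac = (bits & ((1 << frac_bits) - 1)) if frac_bits > 0 else 0
    fraction = 1.0 + (frac / (1 << frac_bits) if frac_bits > 0 else 0.0)
    # value = sign * useed**k * 2**exp * fraction, with useed = 2**(2**es)
    return sign * fraction * 2.0 ** ((1 << es) * k + exp)

if __name__ == "__main__":
    print(decode_posit(0x4000))  # 1.0
    print(decode_posit(0x5000))  # 2.0
    print(decode_posit(0x3000))  # 0.5
```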
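The second innovation rests on a standard identity: in the logarithm domain, multiplication, division, squaring, and square root reduce to addition, subtraction, doubling, and halving, and the last two are single shifts in fixed-point hardware. A minimal sketch of that mapping, with plain Python floats standing in for LPE's log-domain datapath:

```python
import math

def to_log(x: float) -> float:
    """Map a positive magnitude into the log2 domain."""
    assert x > 0, "this sketch handles positive magnitudes only"
    return math.log2(x)

def from_log(l: float) -> float:
    """Map a log2-domain value back to the linear domain."""
    return 2.0 ** l

def log_mul(la: float, lb: float) -> float:
    return la + lb       # multiply (convolution) becomes add

def log_div(la: float, lb: float) -> float:
    return la - lb       # divide (batch norm) becomes subtract

def log_square(la: float) -> float:
    return 2.0 * la      # square becomes doubling: a left shift in hardware

def log_sqrt(la: float) -> float:
    return la / 2.0      # square root becomes halving: a right shift

if __name__ == "__main__":
    la, lb = to_log(6.0), to_log(1.5)
    print(from_log(log_mul(la, lb)))   # 9.0   == 6.0 * 1.5
    print(from_log(log_div(la, lb)))   # 4.0   == 6.0 / 1.5
    print(from_log(log_square(la)))    # 36.0  == 6.0 ** 2
    print(from_log(log_sqrt(la)))      # ~2.449 == sqrt(6.0)
```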
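The abstract does not detail the two-stage accumulator's microarchitecture. One plausible reading, sketched below with NumPy and a made-up flush_every parameter, is a narrow first-stage running sum that is periodically flushed into a wider second stage, so long dot products keep a wide dynamic range without paying for a wide adder on every cycle.

```python
import numpy as np

def two_stage_accumulate(products, flush_every: int = 16) -> float:
    """Hedged sketch of two-stage accumulation: a low bit-width stage-1
    accumulator is flushed into a wider stage-2 accumulator every
    flush_every terms (a hypothetical knob, not from the paper)."""
    narrow = np.float16(0.0)          # stage 1: low bit-width running sum
    wide = np.float32(0.0)            # stage 2: wider range and precision
    for i, p in enumerate(products, start=1):
        narrow = np.float16(narrow + np.float16(p))
        if i % flush_every == 0:      # flush before stage 1 runs out of range
            wide = np.float32(wide + narrow)
            narrow = np.float16(0.0)
    return float(wide + narrow)

if __name__ == "__main__":
    import random
    products = [random.uniform(-1.0, 1.0) for _ in range(10_000)]
    print(two_stage_accumulate(products), sum(products))
```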