DynExit: A Dynamic Early-Exit Strategy for Deep Residual Networks

Meiqi Wang, Jianqiao Mo, Jun Lin, Zhongfeng Wang, L. Du
{"title":"深度残差网络的动态早退出策略","authors":"Meiqi Wang, Jianqiao Mo, Jun Lin, Zhongfeng Wang, L. Du","doi":"10.1109/SiPS47522.2019.9020551","DOIUrl":null,"url":null,"abstract":"Early-exit is a kind of technique to terminate a pre-specified computation at an early stage depending on the input samples and has been introduced to reduce energy consumption for Deep Neural Networks (DNNs). Previous early-exit approaches suffered from the burden of manually tuning early-exit loss-weights to find a good trade-off between complexity reduction and system accuracy. In this work, we first propose DynExit, a dynamic loss-weight modification strategy for ResNets, which adaptively modifies the ratio of different exit branches and searches for a proper spot for both accuracy and cost. Then, an efficient hardware unit for early-exit branches is developed, which can be easily integrated to existing hardware architectures of DNNs to reduce average computing latency and energy cost. Experimental results show that the proposed DynExit strategy can reduce up to 43.6% FLOPS compared to the state-of-the-arts approaches. On the other hand, it is able to achieve 1.2% accuracy improvement over the existing end-to-end fixed loss-weight training scheme with comparable computation reduction ratio. The proposed hardware architecture for DynExit is evaluated on the platform of Xilinx Zynq-7000 ZC706 development board. Synthesis results demonstrate that the architecture can achieve high speed with low hardware complexity. To the best of our knowledge, this is the first hardware implementation for early-exit techniques used for DNNs in open literature.","PeriodicalId":256971,"journal":{"name":"2019 IEEE International Workshop on Signal Processing Systems (SiPS)","volume":"173 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"25","resultStr":"{\"title\":\"DynExit: A Dynamic Early-Exit Strategy for Deep Residual Networks\",\"authors\":\"Meiqi Wang, Jianqiao Mo, Jun Lin, Zhongfeng Wang, L. Du\",\"doi\":\"10.1109/SiPS47522.2019.9020551\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Early-exit is a kind of technique to terminate a pre-specified computation at an early stage depending on the input samples and has been introduced to reduce energy consumption for Deep Neural Networks (DNNs). Previous early-exit approaches suffered from the burden of manually tuning early-exit loss-weights to find a good trade-off between complexity reduction and system accuracy. In this work, we first propose DynExit, a dynamic loss-weight modification strategy for ResNets, which adaptively modifies the ratio of different exit branches and searches for a proper spot for both accuracy and cost. Then, an efficient hardware unit for early-exit branches is developed, which can be easily integrated to existing hardware architectures of DNNs to reduce average computing latency and energy cost. Experimental results show that the proposed DynExit strategy can reduce up to 43.6% FLOPS compared to the state-of-the-arts approaches. On the other hand, it is able to achieve 1.2% accuracy improvement over the existing end-to-end fixed loss-weight training scheme with comparable computation reduction ratio. The proposed hardware architecture for DynExit is evaluated on the platform of Xilinx Zynq-7000 ZC706 development board. Synthesis results demonstrate that the architecture can achieve high speed with low hardware complexity. 
To the best of our knowledge, this is the first hardware implementation for early-exit techniques used for DNNs in open literature.\",\"PeriodicalId\":256971,\"journal\":{\"name\":\"2019 IEEE International Workshop on Signal Processing Systems (SiPS)\",\"volume\":\"173 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"25\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Workshop on Signal Processing Systems (SiPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SiPS47522.2019.9020551\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Workshop on Signal Processing Systems (SiPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SiPS47522.2019.9020551","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 25

Abstract

Early-exit is a technique that terminates a pre-specified computation at an early stage depending on the input sample, and it has been introduced to reduce the energy consumption of Deep Neural Networks (DNNs). Previous early-exit approaches suffered from the burden of manually tuning early-exit loss weights to find a good trade-off between complexity reduction and system accuracy. In this work, we first propose DynExit, a dynamic loss-weight modification strategy for ResNets, which adaptively modifies the ratio of different exit branches and searches for a proper operating point with respect to both accuracy and cost. We then develop an efficient hardware unit for early-exit branches, which can be easily integrated into existing DNN hardware architectures to reduce average computing latency and energy cost. Experimental results show that the proposed DynExit strategy can reduce FLOPS by up to 43.6% compared to state-of-the-art approaches. It also achieves a 1.2% accuracy improvement over the existing end-to-end fixed loss-weight training scheme at a comparable computation reduction ratio. The proposed hardware architecture for DynExit is evaluated on a Xilinx Zynq-7000 ZC706 development board. Synthesis results demonstrate that the architecture achieves high speed with low hardware complexity. To the best of our knowledge, this is the first hardware implementation of early-exit techniques for DNNs in the open literature.
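To make the idea concrete, the sketch below shows a minimal early-exit classifier in PyTorch: each backbone stage is followed by a lightweight exit branch, inference stops at the first exit whose softmax confidence clears a threshold, and the per-exit loss weights are renormalized during training in proportion to each exit's current loss. The module sizes, the confidence threshold, and the weight-update rule are illustrative assumptions, not the exact DynExit formulation or hardware design described in the paper.

```python
# Illustrative early-exit network with dynamically adjusted per-exit loss weights.
# NOTE: stage/exit shapes, the confidence threshold, and the weight-update rule
# are assumptions for demonstration, not the paper's DynExit algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Backbone split into stages, each followed by a small exit branch."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
        ])
        # One lightweight classifier (exit branch) per stage.
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(c, num_classes))
            for c in (16, 32, 64)
        ])

    def forward(self, x):
        # Training: return logits from every exit so all branches receive a loss.
        logits = []
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            logits.append(exit_head(x))
        return logits

    @torch.no_grad()
    def predict(self, x, threshold: float = 0.9):
        # Inference (batch size 1): stop at the first exit whose max softmax
        # probability clears the confidence threshold (assumed exit criterion).
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            probs = F.softmax(exit_head(x), dim=1)
            conf, pred = probs.max(dim=1)
            if conf.item() >= threshold:
                return pred
        return pred  # fall back to the final exit


def train_step(model, batch, labels, optimizer, weights):
    """One joint training step; exit loss weights are renormalized each step
    in proportion to each exit's current loss (an assumed update rule)."""
    optimizer.zero_grad()
    losses = torch.stack([F.cross_entropy(l, labels) for l in model(batch)])
    total = (weights * losses).sum()
    total.backward()
    optimizer.step()
    # Shift weight toward the harder (higher-loss) exits for the next step.
    new_weights = losses.detach() / losses.detach().sum()
    return total.item(), new_weights


if __name__ == "__main__":
    model = EarlyExitNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    weights = torch.full((3,), 1.0 / 3.0)  # start with equal loss weights
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    loss, weights = train_step(model, x, y, optimizer, weights)
    print(loss, weights, model.predict(x[:1]).item())
```

The design choice mirrored here is the one the abstract emphasizes: instead of hand-tuning fixed loss weights for each exit, the weights are adjusted automatically during training, while at inference time easy inputs leave the network at shallow exits and only hard inputs pay for the full depth.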