Hardware-Aware NAS Framework with Layer Adaptive Scheduling on Embedded System

Chuxi Li, Xiaoya Fan, Shengbing Zhang, Zhao Yang, Miao Wang, Danghui Wang, Meng Zhang
2021 26th Asia and South Pacific Design Automation Conference (ASP-DAC) · Published 2021-01-18 · DOI: 10.1145/3394885.3431536
Citations: 2

Abstract

Neural Architecture Search (NAS) has been proven to be an effective solution for automatically building Deep Convolutional Neural Network (DCNN) models. Subsequently, several hardware-aware NAS frameworks have incorporated hardware latency into their search objectives to avoid the risk that a searched network cannot be deployed on the target platform. However, the mismatch between NAS and hardware persists, because the applicability of the searched networks' layer characteristics to the hardware mapping has not been reconsidered. A convolutional neural network layer can be executed under various hardware dataflows with different performance, and the on-chip data-reuse characteristics vary with the dataflow to fit the parallel structure. This mismatch results in significant performance degradation for maladaptive layers produced by NAS, which could achieve much better latency if the adopted dataflow were changed. To address the problem that network latency alone is insufficient to evaluate deployment efficiency, this paper proposes a novel hardware-aware NAS framework that considers the adaptability between layers and dataflow patterns. Besides, we develop an optimized layer-adaptive data scheduling strategy as well as a coarse-grained reconfigurable computing architecture, so that the searched networks can be deployed with high power efficiency by selecting the most appropriate dataflow pattern layer by layer under limited resources. Evaluation results show that the proposed NAS framework can search DCNNs with accuracy similar to state-of-the-art models and low inference latency, and that the proposed architecture both improves power efficiency and reduces energy consumption.
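To illustrate the layer-by-layer scheduling idea described above, the following is a minimal sketch (not the paper's actual implementation): each layer is independently assigned the candidate dataflow pattern with the lowest estimated latency. The dataflow names, the layer description format, and the toy latency model are all hypothetical stand-ins for the hardware cost models such a framework would use.

```python
# Illustrative sketch of greedy layer-adaptive dataflow selection.
# All dataflow names and efficiency numbers below are hypothetical.

DATAFLOWS = ["weight_stationary", "output_stationary", "row_stationary"]

def estimate_latency(layer, dataflow):
    """Toy latency model: total MACs divided by a utilization factor that
    reflects how well the layer's shape matches the dataflow's preferred
    parallel dimension (hypothetical capacities of 64/256/128 lanes)."""
    macs = (layer["c_in"] * layer["c_out"] * layer["kernel"] ** 2
            * layer["h"] * layer["w"])
    eff = {
        "weight_stationary": min(layer["c_out"], 64) / 64,
        "output_stationary": min(layer["h"] * layer["w"], 256) / 256,
        "row_stationary":    min(layer["c_in"] * layer["kernel"], 128) / 128,
    }[dataflow]
    return macs / max(eff, 1e-6)

def schedule(network):
    """Pick the minimum-estimated-latency dataflow for each layer."""
    return [(layer["name"],
             min(DATAFLOWS, key=lambda d: estimate_latency(layer, d)))
            for layer in network]

layers = [
    {"name": "conv1", "c_in": 3,  "c_out": 64,  "kernel": 3, "h": 112, "w": 112},
    {"name": "conv2", "c_in": 64, "c_out": 128, "kernel": 3, "h": 56,  "w": 56},
]
print(schedule(layers))
```

A real framework would replace `estimate_latency` with measurements or an analytical model of the reconfigurable fabric, and would account for the reconfiguration cost of switching dataflows between layers rather than treating each layer in isolation.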