Reg-Tune: A Regression-Focused Fine-Tuning Approach for Profiling Low Energy Consumption and Latency

Impact Factor 2.8 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Hardware & Architecture)
A. Mazumder, Farshad Safavi, Maryam Rahnemoonfar, T. Mohsenin
{"title":"Reg-Tune: A Regression-Focused Fine-Tuning Approach for Profiling Low Energy Consumption and Latency","authors":"A. Mazumder, Farshad Safavi, Maryam Rahnemoonfar, T. Mohsenin","doi":"10.1145/3623380","DOIUrl":null,"url":null,"abstract":"Fine-tuning deep neural networks (DNNs) is pivotal for creating inference modules that can be suitably imported to edge or FPGA (Field Programmable Gate Arrays) platforms. Traditionally, exploration of different parameters throughout the layers of DNNs has been done using grid search and other brute force techniques. Though these methods lead to the optimal choice of network parameters, the search process can be very time-consuming and may not consider deployment constraints across different target platforms. This work addresses this problem by proposing Reg-Tune, a regression-based profiling approach to quickly determine the trend of different metrics in relation to hardware deployment of neural networks on tinyML platforms like FPGAs and edge devices. We start by training a handful of configurations belonging to different combinations of \\(\\mathcal {NN}\\scriptstyle \\langle q (quantization),\\,s (scaling)\\rangle \\displaystyle \\) or \\(\\mathcal {NN}\\scriptstyle \\langle r (resolution),\\,s\\rangle \\displaystyle \\) workloads to generate the accuracy values respectively for their corresponding application. Next, we deploy these configurations on the target device to generate energy/latency values. According to our hypothesis, the most energy-efficient configuration suitable for deployment on the target device is a function of the variables q, r, and s. Finally, these trained and deployed configurations and their related results are used as data points for polynomial regression with the variables q, r, and s to realize the trend for accuracy/energy/latency on the target device. Our setup allows us to choose the near-optimal energy-consuming or latency-driven configuration for the desired accuracy from the contour profiles of energy/latency across different tinyML device platforms. To this extent, we demonstrate the profiling process for three different case studies and across two platforms for energy and latency fine-tuning. Our approach results in at least 5.7 × better energy efficiency when compared to recent implementations for human activity recognition on FPGA and 74.6% reduction in latency for semantic segmentation of aerial imagery on edge devices compared to baseline deployments.","PeriodicalId":50914,"journal":{"name":"ACM Transactions on Embedded Computing Systems","volume":" ","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Embedded Computing Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3623380","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Fine-tuning deep neural networks (DNNs) is pivotal for creating inference modules that can be suitably imported to edge or FPGA (Field Programmable Gate Array) platforms. Traditionally, exploration of different parameters throughout the layers of DNNs has been done using grid search and other brute-force techniques. Though these methods lead to the optimal choice of network parameters, the search process can be very time-consuming and may not consider deployment constraints across different target platforms. This work addresses this problem by proposing Reg-Tune, a regression-based profiling approach to quickly determine the trend of different metrics in relation to hardware deployment of neural networks on tinyML platforms like FPGAs and edge devices. We start by training a handful of configurations belonging to different combinations of \(\mathcal{NN}\langle q\ \text{(quantization)},\, s\ \text{(scaling)}\rangle\) or \(\mathcal{NN}\langle r\ \text{(resolution)},\, s\rangle\) workloads to generate the accuracy values for their corresponding applications. Next, we deploy these configurations on the target device to generate energy/latency values. According to our hypothesis, the most energy-efficient configuration suitable for deployment on the target device is a function of the variables q, r, and s. Finally, these trained and deployed configurations and their related results are used as data points for polynomial regression over the variables q, r, and s to realize the trend for accuracy/energy/latency on the target device. Our setup allows us to choose the near-optimal energy-consuming or latency-driven configuration for the desired accuracy from the contour profiles of energy/latency across different tinyML device platforms. To this end, we demonstrate the profiling process for three different case studies and across two platforms for energy and latency fine-tuning. Our approach results in at least 5.7× better energy efficiency when compared to recent implementations for human activity recognition on FPGA, and a 74.6% reduction in latency for semantic segmentation of aerial imagery on edge devices compared to baseline deployments.
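The following sketch illustrates the regression-based profiling idea described above: fit polynomial surfaces to a handful of profiled \(\langle q, s\rangle\) configurations, then use the predicted accuracy/energy surfaces (the analogue of the paper's contour profiles) to pick a near-optimal configuration. This is not the authors' implementation; the data values, degree-2 polynomial, and scikit-learn pipeline are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): polynomial regression over a few
# profiled NN<q, s> configurations, then selection of the lowest-energy
# configuration that still meets an accuracy target. All numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical profiling results: (quantization bits q, scaling factor s)
configs = np.array([[2, 0.25], [2, 1.0], [4, 0.5], [8, 0.25], [8, 1.0], [16, 0.5]])
accuracy = np.array([0.82, 0.88, 0.90, 0.91, 0.95, 0.94])   # from training runs
energy_mj = np.array([1.1, 3.8, 2.6, 3.0, 9.5, 8.1])        # from on-device measurement

# Degree-2 polynomial regression in (q, s) for each metric
acc_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(configs, accuracy)
eng_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(configs, energy_mj)

# Dense grid over the (q, s) design space; the predicted surfaces play the
# role of the energy/accuracy contour profiles.
q_grid, s_grid = np.meshgrid(np.linspace(2, 16, 50), np.linspace(0.25, 1.0, 50))
grid = np.column_stack([q_grid.ravel(), s_grid.ravel()])
acc_pred = acc_model.predict(grid)
eng_pred = eng_model.predict(grid)

# Near-optimal configuration: minimum predicted energy subject to accuracy >= target
target_acc = 0.90
feasible = acc_pred >= target_acc
best = grid[feasible][np.argmin(eng_pred[feasible])]
print(f"suggested (q, s) ~ ({best[0]:.1f}, {best[1]:.2f})")
```

The same pattern extends to \(\langle r, s\rangle\) workloads and to latency by swapping the regression target; only a handful of trained-and-deployed configurations are needed to fit the surfaces.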
Source Journal
ACM Transactions on Embedded Computing Systems
Category: Engineering & Technology – Computer Science: Software Engineering
CiteScore: 3.70
Self-citation rate: 0.00%
Articles per year: 138
Average review time: 6 months
Journal description: The design of embedded computing systems, both the software and hardware, increasingly relies on sophisticated algorithms, analytical models, and methodologies. ACM Transactions on Embedded Computing Systems (TECS) aims to present the leading work relating to the analysis, design, behavior, and experience with embedded computing systems.