Reg-Tune: A Regression-Focused Fine-Tuning Approach for Profiling Low Energy Consumption and Latency

A. Mazumder, Farshad Safavi, Maryam Rahnemoonfar, T. Mohsenin

DOI: 10.1145/3623380
ACM Transactions on Embedded Computing Systems (JCR Q2, Computer Science, Hardware & Architecture; IF 2.8)
Published: 2023-09-08
Citations: 0
Abstract
Fine-tuning deep neural networks (DNNs) is pivotal for creating inference modules that can be readily deployed to edge or FPGA (field-programmable gate array) platforms. Traditionally, exploration of different parameters throughout the layers of DNNs has been done using grid search and other brute-force techniques. Though these methods lead to an optimal choice of network parameters, the search process can be very time-consuming and may not account for deployment constraints across different target platforms. This work addresses the problem with Reg-Tune, a regression-based profiling approach that quickly determines the trend of different metrics with respect to hardware deployment of neural networks on tinyML platforms such as FPGAs and edge devices. We start by training a handful of configurations drawn from different combinations of \(\mathcal{NN}\langle q\ (\text{quantization}),\, s\ (\text{scaling})\rangle\) or \(\mathcal{NN}\langle r\ (\text{resolution}),\, s\rangle\) workloads to generate accuracy values for each corresponding application. Next, we deploy these configurations on the target device to obtain energy/latency values. Our hypothesis is that the most energy-efficient configuration suitable for deployment on the target device is a function of the variables q, r, and s. Finally, the trained and deployed configurations and their measured results serve as data points for polynomial regression in q, r, and s, revealing the trend of accuracy/energy/latency on the target device. This setup allows us to choose a near-optimal energy- or latency-driven configuration for the desired accuracy from the contour profiles of energy/latency across different tinyML device platforms. We demonstrate the profiling process on three case studies and across two platforms for energy and latency fine-tuning.
Our approach yields at least 5.7× better energy efficiency than recent FPGA implementations of human activity recognition, and a 74.6% latency reduction for semantic segmentation of aerial imagery on edge devices relative to baseline deployments.
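The core idea above — fit a low-order polynomial to a handful of measured (configuration, metric) points, then sweep the fitted surfaces to pick the cheapest configuration that still meets an accuracy target — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the measurements, the degree-2 polynomial basis, and the 0.92 accuracy target are all hypothetical placeholders.

```python
import numpy as np

def poly_features(q, s, degree=2):
    # 2-D polynomial basis: 1, s, s^2, q, q*s, q^2 (for degree=2)
    feats = [q**i * s**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    return np.stack(feats, axis=-1)

def fit_surface(points, values, degree=2):
    # Least-squares fit of a polynomial surface to measured data points
    X = poly_features(points[:, 0], points[:, 1], degree)
    coef, *_ = np.linalg.lstsq(X, values, rcond=None)
    return coef

def predict(coef, q, s, degree=2):
    return poly_features(q, s, degree) @ coef

# Hypothetical profiling data: a handful of trained-and-deployed
# configurations (q = quantization bits, s = scaling factor) with
# measured energy and accuracy. Real values come from the device.
configs = np.array([[2, 0.25], [2, 1.0], [4, 0.5], [8, 0.25],
                    [8, 1.0], [16, 0.5], [16, 1.0], [4, 1.0]])
energy_mj = np.array([1.1, 2.0, 1.8, 2.4, 4.9, 5.6, 9.8, 2.6])
accuracy = np.array([0.80, 0.86, 0.88, 0.90, 0.94, 0.95, 0.96, 0.91])

e_coef = fit_surface(configs, energy_mj)
a_coef = fit_surface(configs, accuracy)

# Sweep a dense (q, s) grid over the fitted surfaces and keep the
# lowest-energy point that still meets the accuracy target -- i.e.
# read the near-optimal configuration off the contour profiles.
best = None
for q in np.linspace(2, 16, 57):
    for s in np.linspace(0.25, 1.0, 31):
        if predict(a_coef, q, s) >= 0.92:
            eng = predict(e_coef, q, s)
            if best is None or eng < best[0]:
                best = (eng, q, s)

print(best)  # (predicted energy, q, s) of the chosen configuration
```

The payoff of this approach is that only the eight measured configurations require training and on-device deployment; every other point on the grid is evaluated from the regression surfaces, which is what makes the search fast compared to brute-force grid search over actual deployments.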
Journal description:
The design of embedded computing systems, both the software and hardware, increasingly relies on sophisticated algorithms, analytical models, and methodologies. ACM Transactions on Embedded Computing Systems (TECS) aims to present the leading work relating to the analysis, design, behavior, and experience with embedded computing systems.