Dynamic Offloading for Improved Performance and Energy Efficiency in Heterogeneous IoT-Edge-Cloud Continuum
J. Vicenzi, Guilherme Korol, M. Jordan, Wagner Ourique de Morais, Hazem Ali, Edison Pignaton De Freitas, M. B. Rutzig, A. C. S. Beck
2023 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2023. DOI: 10.1109/ISVLSI59464.2023.10238564
Abstract
While machine learning applications on IoT devices are becoming more widespread, the computational and power limitations of these devices pose a great challenge. To handle this increasing computational burden, edge and cloud solutions emerge as a means to offload computation to more powerful devices. However, the unstable nature of network connections constantly changes communication costs, making the offloading decision (i.e., when and where to transfer data) a dynamic trade-off. In this work, we propose DECOS: a framework that automatically selects, at run-time, the offloading target with minimum latency based on the computational capabilities of the devices and the network status at a given moment. We use heterogeneous devices for the edge and Cloud nodes and evaluate the framework’s performance with the MobileNetV1 CNN and network traffic data from a real-world 4G bandwidth dataset. DECOS effectively selects the best processing node to maintain the minimum possible latency, reducing it by up to 29% compared to Cloud-exclusive processing while reducing energy consumption by 1.9$\times$ compared to IoT-exclusive execution.
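The abstract describes run-time node selection driven by per-node compute capability and current network conditions. Below is a minimal sketch of that kind of latency-based selection, not the authors' actual DECOS implementation: the node names, per-node inference times, payload size, and bandwidth value are all hypothetical placeholders, and the model simply sums an (optional) transfer time with an assumed inference time and picks the minimum.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    inference_time_s: float   # assumed per-inference latency on this node (hypothetical)
    remote: bool              # True if the input must be sent over the network

def estimated_latency(node: Node, payload_bytes: int, bandwidth_bps: float) -> float:
    """Estimated end-to-end latency = transfer time (if remote) + inference time."""
    transfer = (payload_bytes * 8) / bandwidth_bps if node.remote else 0.0
    return transfer + node.inference_time_s

def select_node(nodes: list[Node], payload_bytes: int, bandwidth_bps: float) -> Node:
    """Pick the node with the lowest estimated latency for the current conditions."""
    return min(nodes, key=lambda n: estimated_latency(n, payload_bytes, bandwidth_bps))

if __name__ == "__main__":
    # Illustrative numbers only: a slow IoT device, a faster edge node, a fast cloud node.
    nodes = [
        Node("iot",   inference_time_s=0.40, remote=False),
        Node("edge",  inference_time_s=0.05, remote=True),
        Node("cloud", inference_time_s=0.01, remote=True),
    ]
    payload_bytes = 150_000        # e.g. one ~150 kB input image
    bandwidth_bps = 2_000_000      # e.g. 2 Mb/s sampled from a 4G bandwidth trace
    best = select_node(nodes, payload_bytes, bandwidth_bps)
    print(f"offload to: {best.name}")
```

As the bandwidth sample changes (for example, drawn frame by frame from a 4G trace), the selected node can shift between local, edge, and cloud execution, which is the dynamic trade-off the paper targets.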