Comparing Control Theory and Deep Reinforcement Learning techniques for decentralized task offloading in the edge–cloud continuum
Gorka Nieto, Neco Villegas, Luis Diez, Idoia de la Iglesia, Unai Lopez-Novoa, Cristina Perfecto, Ramón Agüero
Simulation Modelling Practice and Theory, vol. 144, Article 103170, published 2025-07-03. DOI: 10.1016/j.simpat.2025.103170
Citations: 0
Abstract
With the increasingly demanding requirements of Internet-of-Things (IoT) applications in terms of latency, energy efficiency, and computational resources, among others, task offloading has become crucial to optimize performance across edge and cloud infrastructures. Thus, optimizing offloading decisions to reduce latency and energy consumption and, ultimately, to guarantee appropriate service levels and enhance performance has become an important area of research. There are many approaches to guide the offloading of tasks in a distributed environment, and, in this work, we present a comprehensive comparison of three families of approaches: a Control Theory (CT) Lyapunov optimization method, three Deep Reinforcement Learning (DRL)-based strategies, and traditional solutions such as Round-Robin or static schedulers. This comparison has been conducted using ITSASO, an in-house developed simulation platform for evaluating decentralized task offloading strategies in a three-layer computing hierarchy comprising IoT, fog, and cloud nodes. The platform models service generation in the IoT layer using a configurable distribution, enabling each IoT node to decide whether to execute tasks autonomously (locally), offload them to the fog layer, or send them to the cloud server. Our approach aims to minimize the energy consumption of devices while meeting tasks’ latency requirements. Our simulation results reveal that Lyapunov optimization excels in static environments, while DRL approaches prove to be more effective in dynamic settings, by better adapting to changing requirements and workloads. This study offers an analysis of the trade-offs between these solutions, highlighting the scenarios in which each scheduling approach is most suitable, thereby contributing valuable theoretical insights into the effectiveness of various offloading strategies in different environments. The source code of ITSASO is publicly available.
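To illustrate the kind of per-node decision the abstract describes, the following minimal Python sketch shows a drift-plus-penalty (Lyapunov-style) offloading rule for a single IoT node choosing between local, fog, and cloud execution. It is not taken from ITSASO or from the paper's formulation; the virtual-queue update, the OffloadOption fields, and all numeric values are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class OffloadOption:
    name: str          # "local", "fog" or "cloud"
    energy_j: float    # estimated device-side energy cost (Joules)
    latency_s: float   # estimated completion latency (seconds)


class LyapunovOffloader:
    """Toy drift-plus-penalty rule: a virtual queue q accumulates deadline
    violations, and v trades device energy against deadline pressure."""

    def __init__(self, v: float = 10.0):
        self.q = 0.0   # virtual queue (accumulated deadline debt)
        self.v = v     # Lyapunov trade-off parameter

    def decide(self, options, deadline_s):
        # Choose the option minimizing  v * energy + q * (latency - deadline).
        best = min(
            options,
            key=lambda o: self.v * o.energy_j + self.q * (o.latency_s - deadline_s),
        )
        # Update the virtual queue with the realized slack or violation.
        self.q = max(0.0, self.q + best.latency_s - deadline_s)
        return best


if __name__ == "__main__":
    node = LyapunovOffloader(v=10.0)
    task_options = [
        OffloadOption("local", energy_j=2.0, latency_s=0.30),
        OffloadOption("fog",   energy_j=0.8, latency_s=0.15),
        OffloadOption("cloud", energy_j=0.5, latency_s=0.40),
    ]
    choice = node.decide(task_options, deadline_s=0.25)
    print(f"Offloading decision: {choice.name}")
```

In such a scheme, a larger v weights energy savings more heavily, while accumulated deadline debt in q pushes the node toward faster (often fog) execution, mirroring the energy–latency trade-off the paper studies.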
Journal description:
The journal Simulation Modelling Practice and Theory provides a forum for original, high-quality papers dealing with any aspect of systems simulation and modelling.
The journal aims to be a reference and a powerful tool for all those professionally active in, or interested in, the methods and applications of simulation. Submitted papers will be peer reviewed and must significantly contribute to modelling and simulation in general or use modelling and simulation in application areas.
Paper submission is solicited on:
• theoretical aspects of modelling and simulation including formal modelling, model-checking, random number generators, sensitivity analysis, variance reduction techniques, experimental design, meta-modelling, methods and algorithms for validation and verification, selection and comparison procedures, etc.;
• methodology and application of modelling and simulation in any area, including computer systems, networks, real-time and embedded systems, mobile and intelligent agents, manufacturing and transportation systems, management, engineering, biomedical engineering, economics, ecology and environment, education, transaction handling, etc.;
• simulation languages and environments, including those specific to distributed computing, grid computing, high-performance computers or computer networks, etc.;
• distributed and real-time simulation, simulation interoperability;
• tools for high performance computing simulation, including dedicated architectures and parallel computing.