Optimizing the distribution of tasks in Internet of Things using edge processing-based reinforcement learning

IF 4.3
Mohsen Latifi, Nahideh Derakhshanfard, Hossein Heydari
Journal: Intelligent Systems with Applications, Volume 28, Article 200585
DOI: 10.1016/j.iswa.2025.200585
Published: 2025-09-14
URL: https://www.sciencedirect.com/science/article/pii/S2667305325001115
Citations: 0

Abstract

As the Internet of Things expands, managing intelligent tasks in dynamic and heterogeneous environments has emerged as a primary challenge for edge-processing systems. A critical issue in this domain is the optimal allocation of tasks. A review of prior studies indicates that many existing approaches either focus on a single objective or suffer from instability and overestimation of decision values during the learning phase. This paper aims to bridge this gap by proposing an approach that combines reinforcement learning, using a double Q-learning algorithm, with a multi-objective reward function. The designed reward function facilitates intelligent decision-making under realistic conditions by incorporating three essential factors: task execution delay, energy consumption of edge nodes, and computational load balancing across the nodes. The inputs to the proposed method include task sizes, per-task deadlines, remaining node energy, node computational power, proximity to the edge nodes, and the current workload of each node. The method's output at any given moment is the decision assigning each task to the most suitable node. Simulation results in a dynamic environment demonstrate that the proposed method outperforms traditional reinforcement learning algorithms: average task execution delay is reduced by up to 23%, node energy consumption decreases by up to 18%, and load balancing among nodes improves by up to 27%.
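The core mechanism the abstract describes — double Q-learning with a multi-objective reward over delay, energy, and load balance — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the single abstract state, the reward weights, and the toy three-node environment are all assumptions made here for demonstration.

```python
import random

random.seed(0)

NODES = 3                       # candidate edge nodes (the action space)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# Two independent Q-tables, the defining feature of double Q-learning:
# one table selects the best next action, the other evaluates it,
# which curbs the overestimation bias of single Q-learning.
QA, QB = {}, {}

def q(table, state, action):
    return table.get((state, action), 0.0)

def reward(delay, energy, load_imbalance, w=(0.5, 0.3, 0.2)):
    """Multi-objective reward: penalize task execution delay, node
    energy use, and load imbalance. The weights w are illustrative."""
    return -(w[0] * delay + w[1] * energy + w[2] * load_imbalance)

def choose_action(state):
    # Epsilon-greedy over the sum of both tables.
    if random.random() < EPS:
        return random.randrange(NODES)
    return max(range(NODES), key=lambda a: q(QA, state, a) + q(QB, state, a))

def update(state, action, r, next_state):
    # Double Q-learning rule: randomly pick one table to update,
    # using the *other* table to evaluate the chosen best action.
    if random.random() < 0.5:
        best = max(range(NODES), key=lambda a: q(QA, next_state, a))
        target = r + GAMMA * q(QB, next_state, best)
        QA[(state, action)] = q(QA, state, action) + ALPHA * (target - q(QA, state, action))
    else:
        best = max(range(NODES), key=lambda a: q(QB, next_state, a))
        target = r + GAMMA * q(QA, next_state, best)
        QB[(state, action)] = q(QB, state, action) + ALPHA * (target - q(QB, state, action))

def step(action):
    # Toy environment: node 0 is fast and efficient, so the agent
    # should learn to route tasks there.
    delay = [1.0, 3.0, 5.0][action] + random.random() * 0.1
    energy = [0.5, 1.0, 2.0][action]
    load = [0.2, 0.5, 0.8][action]
    return reward(delay, energy, load)

state = "task"  # a single abstract state, for illustration only
for _ in range(2000):
    a = choose_action(state)
    update(state, a, step(a), state)

best_node = max(range(NODES), key=lambda a: q(QA, state, a) + q(QB, state, a))
print(best_node)
```

In the paper's setting the state would instead encode the inputs listed above (task size, deadline, node energy, compute power, proximity, workload); the double-table update shown here is what stabilizes learning against value overestimation.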
Source journal CiteScore: 5.60; self-citation rate: 0.00%