A self-supervised deep reinforcement learning for Zero-Shot Task scheduling in mobile edge computing environments

IF 4.8 | CAS Zone 3, Computer Science | JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Parisa Khoshvaght, Amir Haider, Amir Masoud Rahmani, Shakiba Rajabi, Farhad Soleimanian Gharehchopogh, Jan Lansky, Mehdi Hosseinzadeh
{"title":"A self-supervised deep reinforcement learning for Zero-Shot Task scheduling in mobile edge computing environments","authors":"Parisa Khoshvaght ,&nbsp;Amir Haider ,&nbsp;Amir Masoud Rahmani ,&nbsp;Shakiba Rajabi ,&nbsp;Farhad Soleimanian Gharehchopogh ,&nbsp;Jan Lansky ,&nbsp;Mehdi Hosseinzadeh","doi":"10.1016/j.adhoc.2025.103977","DOIUrl":null,"url":null,"abstract":"<div><div>The rising need for swift response times makes it essential to use computing resources and network capacities efficiently at the edges of the networks. Mobile Edge Computing (MEC) handles this by processing user data near where it is generated rather than always relying on remote cloud centres. Yet, scheduling tasks under these conditions can be difficult because workloads shift, resources vary, and network performance is unstable. Traditional scheduling strategies often underperform in such rapidly changing settings, and even Deep Reinforcement Learning (DRL) solutions usually require extensive retraining whenever they encounter unfamiliar tasks. This paper proposes a self-supervised DRL framework for zero-shot task scheduling in MEC environments. The system integrates self-supervised learning to generate task embeddings, enabling the model to classify tasks into clusters based on resource requirements and execution complexity. A Soft Actor-Critic (SAC)-based scheduler then optimally assigns tasks to MEC nodes while dynamically adapting to network conditions. The training process combines contrastive learning for task representation and policy optimization to enhance scheduling decisions. Simulations demonstrate that the proposed approach reduces task completion time by up to 22 %, lowers energy consumption by 29 %, and improves latency by 18 % over baseline methods.</div></div>","PeriodicalId":55555,"journal":{"name":"Ad Hoc Networks","volume":"178 ","pages":"Article 103977"},"PeriodicalIF":4.8000,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ad Hoc Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1570870525002252","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

The rising need for swift response times makes it essential to use computing resources and network capacities efficiently at the network edge. Mobile Edge Computing (MEC) handles this by processing user data near where it is generated rather than always relying on remote cloud centres. Yet, scheduling tasks under these conditions can be difficult because workloads shift, resources vary, and network performance is unstable. Traditional scheduling strategies often underperform in such rapidly changing settings, and even Deep Reinforcement Learning (DRL) solutions usually require extensive retraining whenever they encounter unfamiliar tasks. This paper proposes a self-supervised DRL framework for zero-shot task scheduling in MEC environments. The system integrates self-supervised learning to generate task embeddings, enabling the model to group tasks into clusters based on resource requirements and execution complexity. A Soft Actor-Critic (SAC)-based scheduler then optimally assigns tasks to MEC nodes while dynamically adapting to network conditions. The training process combines contrastive learning for task representation with policy optimization to enhance scheduling decisions. Simulations demonstrate that the proposed approach reduces task completion time by up to 22%, lowers energy consumption by 29%, and reduces latency by 18% over baseline methods.
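To make the two-stage pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the same idea: a task encoder pre-trained with a SimCLR-style contrastive (NT-Xent) loss over augmented views of task feature vectors, whose embeddings then feed a stochastic policy head of the kind used by a Soft Actor-Critic scheduler. The feature dimensions, the noise augmentation, the network shapes, and the discrete node-assignment action space are all illustrative assumptions; the paper's exact architecture and full SAC training loop (critics, entropy temperature, replay buffer) are not reproduced here.

```python
# Minimal sketch under assumed shapes/augmentations, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskEncoder(nn.Module):
    """Maps raw task features (e.g. CPU demand, data size, deadline) to a unit-norm embedding."""
    def __init__(self, in_dim=8, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss: two augmented views of the same task are positives."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                       # (2N, D), already unit-norm
    sim = (z @ z.t()) / temperature                      # cosine similarities as logits
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # positive of i is i±N
    return F.cross_entropy(sim, targets)

class SchedulerPolicy(nn.Module):
    """SAC-style stochastic policy head: task embedding + per-node loads -> node assignment."""
    def __init__(self, emb_dim=32, n_nodes=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim + n_nodes, 64), nn.ReLU(), nn.Linear(64, n_nodes))

    def forward(self, task_emb, node_loads):
        logits = self.net(torch.cat([task_emb, node_loads], dim=-1))
        return torch.distributions.Categorical(logits=logits)

# Toy usage: one self-supervised pre-training step, then one scheduling decision.
encoder, policy = TaskEncoder(), SchedulerPolicy()
tasks = torch.rand(16, 8)                                # batch of raw task feature vectors
view1 = tasks + 0.05 * torch.randn_like(tasks)           # cheap noise augmentations (assumption)
view2 = tasks + 0.05 * torch.randn_like(tasks)
nt_xent_loss(encoder(view1), encoder(view2)).backward()  # contrastive representation step

with torch.no_grad():
    dist = policy(encoder(tasks), torch.rand(16, 5))     # current per-node load estimates
    assignment = dist.sample()                           # chosen MEC node for each task
```

In a full SAC setup the policy would be trained against twin Q-critics with an entropy bonus; the sketch stops at the interface the abstract describes, namely pre-trained task embeddings plus current node state in, a node assignment out.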
Source Journal
Ad Hoc Networks (Engineering & Technology, Telecommunications)
CiteScore: 10.20
Self-citation rate: 4.20%
Annual articles: 131
Review time: 4.8 months
Journal Description: Ad Hoc Networks is an international and archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in ad hoc and sensor networking areas. The journal considers original, high-quality, and unpublished contributions addressing all aspects of ad hoc and sensor networks. Specific areas of interest include, but are not limited to:
- Mobile and Wireless Ad Hoc Networks
- Sensor Networks
- Wireless Local and Personal Area Networks
- Home Networks
- Ad Hoc Networks of Autonomous Intelligent Systems
- Novel Architectures for Ad Hoc and Sensor Networks
- Self-organizing Network Architectures and Protocols
- Transport Layer Protocols
- Routing Protocols (unicast, multicast, geocast, etc.)
- Media Access Control Techniques
- Error Control Schemes
- Power-Aware, Low-Power and Energy-Efficient Designs
- Synchronization and Scheduling Issues
- Mobility Management
- Mobility-Tolerant Communication Protocols
- Location Tracking and Location-based Services
- Resource and Information Management
- Security and Fault-Tolerance Issues
- Hardware and Software Platforms, Systems, and Testbeds
- Experimental and Prototype Results
- Quality-of-Service Issues
- Cross-Layer Interactions
- Scalability Issues
- Performance Analysis and Simulation of Protocols