A Deep Reinforcement Learning Based Dynamic Resource Allocation Approach in Satellite Systems

Junyang Zhou, Yunxiao Wan, Yurui Li, Jian Wang
{"title":"A Deep Reinforcement Learning Based Dynamic Resource Allocation Approach in Satellite Systems","authors":"Junyang Zhou;Yunxiao Wan;Yurui Li;Jian Wang","doi":"10.23919/JCIN.2025.11083701","DOIUrl":null,"url":null,"abstract":"Efficient resource allocation in space information networks (SINs) is crucial for providing global connectivity but is challenged by constrained satellite resources and dynamic user demand. While dynamic channel allocation techniques exist, they often fail to handle complex, multi-faceted resource constraints in practical scenarios. To address this issue, this paper introduces a deep reinforcement learning based dynamic resource allocation (DDRA) algorithm. The DDRA formulates the allocation problem as a Markov decision process and employs deep Q-network (DQN) to learn an optimal policy for assigning channel, power, and traffic resources. We developed a simulation environment in ns-3 to evaluate the DDRA algorithm against traditional fixed and greedy random allocation methods. The results demonstrate that the DDRA algorithm significantly outperforms these baselines, achieving substantially lower service blocking rates and higher traffic satisfaction rates across various user demand scenarios. This work validates the potential of DRL to create intelligent, adaptive resource management systems for next-generation satellite networks.","PeriodicalId":100766,"journal":{"name":"Journal of Communications and Information Networks","volume":"10 2","pages":"183-190"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Communications and Information Networks","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11083701/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Efficient resource allocation in space information networks (SINs) is crucial for providing global connectivity but is challenged by constrained satellite resources and dynamic user demand. While dynamic channel allocation techniques exist, they often fail to handle the complex, multi-faceted resource constraints of practical scenarios. To address this issue, this paper introduces a deep reinforcement learning (DRL) based dynamic resource allocation (DDRA) algorithm. DDRA formulates the allocation problem as a Markov decision process (MDP) and employs a deep Q-network (DQN) to learn an optimal policy for assigning channel, power, and traffic resources. We developed a simulation environment in ns-3 to evaluate the DDRA algorithm against traditional fixed and greedy random allocation methods. The results demonstrate that the DDRA algorithm significantly outperforms these baselines, achieving substantially lower service blocking rates and higher traffic satisfaction rates across various user demand scenarios. This work validates the potential of DRL to create intelligent, adaptive resource management systems for next-generation satellite networks.
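
To make the MDP/DQN formulation described in the abstract concrete, the sketch below shows a minimal DQN agent for a joint channel-and-power assignment decision. It is an illustrative assumption, not the paper's implementation: the state encoding (channel occupancy plus remaining power and pending traffic), the action discretization, the network sizes, and the use of PyTorch are all choices made here for exposition, whereas the paper's evaluation runs inside an ns-3 environment.

```python
# Minimal, illustrative DQN sketch for DDRA-style allocation.
# All dimensions, the state/action encoding, and hyperparameters are assumptions
# for illustration; the paper's own agent and environment (ns-3) are not reproduced.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_CHANNELS = 8                         # assumed number of channels per beam
N_POWER_LEVELS = 4                     # assumed discrete transmit-power levels
STATE_DIM = N_CHANNELS + 2             # channel occupancy + remaining power + pending traffic
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS  # one joint (channel, power) choice per request


class QNetwork(nn.Module):
    """Small MLP mapping a network-state vector to a Q-value per joint action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)          # replay buffer of (s, a, r, s', done) tuples
gamma, eps = 0.99, 0.1                 # discount factor and exploration rate


def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy selection over joint (channel, power) assignments."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())


def train_step(batch_size: int = 64) -> None:
    """One DQN update from uniformly sampled replay transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = (
        torch.stack([torch.as_tensor(x[i], dtype=torch.float32) for x in batch])
        for i in range(5)
    )
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target from the slowly updated target network.
        target = r + gamma * target_net(s2).max(dim=1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the reward per transition would come from the simulation environment, for example a term that credits satisfied traffic and penalizes blocked requests, which mirrors the blocking-rate and traffic-satisfaction metrics reported in the abstract; discretizing channel and power jointly keeps the action space small enough for a plain DQN head.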