Joint Optimization of User Association and Power Allocation in Wireless Networks Using a Large Spatio-Temporal Graph Transformer Model

Internet Technology Letters · IF 0.5 · Q4 · TELECOMMUNICATIONS · DOI: 10.1002/itl2.70131 · Published 2025-09-19
D. S. Keerthi, P. Vishwanath, Kothuri Parashu Ramulu, Gopinath Anjinappa, Hirald Dwaraka Praveena
{"title":"基于大时空图变压器模型的无线网络用户关联与功率分配联合优化","authors":"D. S. Keerthi,&nbsp;P. Vishwanath,&nbsp;Kothuri Parashu Ramulu,&nbsp;Gopinath Anjinappa,&nbsp;Hirald Dwaraka Praveena","doi":"10.1002/itl2.70131","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>In this era, Wireless Communication Networks (WCNs) need dynamic and adaptive resource allocation approaches to handle user association and power allocation specifically under multi-connectivity and diverse traffic conditions. However, the conventional approaches struggle due to high computational cost, poor adaptability, and limited generalization. Therefore, this research proposes a large Spatio-Temporal Graph Transformer-based Reinforcement Learning (STGT-RL) model to jointly optimize user association and power allocation in large-scale WCNs. Initially, the network topology is designed using graph representations and incorporates a hybrid encoder that integrates Graph Transformers for spatial user-Base Station (BS) relationships and Spatio-Temporal Transformers for capturing time-varying traffic and channel states. Further, to ensure adaptive decision-making, a Transformer-RL policy agent is trained through a multi-objective reward function that assists in balancing throughput maximization and power efficiency. Furthermore, to enable stable policy learning, the model is initially trained using high-quality supervision from CRFSMA-generated labels, followed by reinforcement-based policy refinement. Hence, the experimental results are simulated on WCN environments to demonstrate that the proposed STGT-RL significantly outperforms baseline deep learning and heuristic-based methods in terms of throughput, energy efficiency, and fairness.</p>\n </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 6","pages":""},"PeriodicalIF":0.5000,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Joint Optimization of User Association and Power Allocation in Wireless Networks Using a Large Spatio-Temporal Graph Transformer Model\",\"authors\":\"D. S. Keerthi,&nbsp;P. Vishwanath,&nbsp;Kothuri Parashu Ramulu,&nbsp;Gopinath Anjinappa,&nbsp;Hirald Dwaraka Praveena\",\"doi\":\"10.1002/itl2.70131\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>In this era, Wireless Communication Networks (WCNs) need dynamic and adaptive resource allocation approaches to handle user association and power allocation specifically under multi-connectivity and diverse traffic conditions. However, the conventional approaches struggle due to high computational cost, poor adaptability, and limited generalization. Therefore, this research proposes a large Spatio-Temporal Graph Transformer-based Reinforcement Learning (STGT-RL) model to jointly optimize user association and power allocation in large-scale WCNs. Initially, the network topology is designed using graph representations and incorporates a hybrid encoder that integrates Graph Transformers for spatial user-Base Station (BS) relationships and Spatio-Temporal Transformers for capturing time-varying traffic and channel states. Further, to ensure adaptive decision-making, a Transformer-RL policy agent is trained through a multi-objective reward function that assists in balancing throughput maximization and power efficiency. 
Furthermore, to enable stable policy learning, the model is initially trained using high-quality supervision from CRFSMA-generated labels, followed by reinforcement-based policy refinement. Hence, the experimental results are simulated on WCN environments to demonstrate that the proposed STGT-RL significantly outperforms baseline deep learning and heuristic-based methods in terms of throughput, energy efficiency, and fairness.</p>\\n </div>\",\"PeriodicalId\":100725,\"journal\":{\"name\":\"Internet Technology Letters\",\"volume\":\"8 6\",\"pages\":\"\"},\"PeriodicalIF\":0.5000,\"publicationDate\":\"2025-09-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet Technology Letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/itl2.70131\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Technology Letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/itl2.70131","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Modern Wireless Communication Networks (WCNs) require dynamic, adaptive resource-allocation approaches to handle user association and power allocation, particularly under multi-connectivity and diverse traffic conditions. Conventional approaches struggle with high computational cost, poor adaptability, and limited generalization. This research therefore proposes a large Spatio-Temporal Graph Transformer-based Reinforcement Learning (STGT-RL) model to jointly optimize user association and power allocation in large-scale WCNs. The network topology is first represented as a graph and processed by a hybrid encoder that combines Graph Transformers, which model spatial user-Base Station (BS) relationships, with Spatio-Temporal Transformers, which capture time-varying traffic and channel states. To enable adaptive decision-making, a Transformer-RL policy agent is trained with a multi-objective reward function that balances throughput maximization against power efficiency. For stable policy learning, the model is first trained with high-quality supervision from CRFSMA-generated labels and then refined through reinforcement-based policy updates. Experiments in simulated WCN environments demonstrate that the proposed STGT-RL significantly outperforms baseline deep learning and heuristic methods in throughput, energy efficiency, and fairness.
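
The hybrid encoder named in the abstract (Graph Transformers for spatial user-BS relationships, Spatio-Temporal Transformers for time-varying traffic and channel states) is not specified in detail here. The sketch below shows one plausible arrangement of that idea, assuming per-snapshot attention masked by the user-BS graph followed by a per-node temporal transformer over the snapshot history; the class name `HybridSpatioTemporalEncoder`, the dimensions, and the layer counts are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn


class HybridSpatioTemporalEncoder(nn.Module):
    """Sketch of a hybrid spatial/temporal encoder (illustrative, not the paper's design)."""

    def __init__(self, feat_dim, d_model=64, n_heads=4, n_temporal_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        # Spatial block: attention over graph neighbours within one snapshot.
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Temporal block: transformer over each node's history of snapshots.
        temporal_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(temporal_layer, n_temporal_layers)

    def forward(self, x, adj):
        # x   : (T, N, feat_dim) features of the N graph nodes (users and BSs)
        #                        over T time snapshots
        # adj : (N, N)           user-BS graph adjacency (nonzero = edge)
        h = self.proj(x)                                    # (T, N, d_model)
        mask = (adj == 0)                                   # True = attention blocked
        mask.fill_diagonal_(False)                          # keep self-attention
        h, _ = self.spatial_attn(h, h, h, attn_mask=mask)   # spatial mixing per snapshot
        h = h.permute(1, 0, 2)                              # (N, T, d_model)
        h = self.temporal(h)                                # temporal mixing per node
        return h[:, -1, :]                                  # latest embedding per node


# Toy usage: 5 snapshots of a graph with 8 nodes and 3 features per node.
if __name__ == "__main__":
    x = torch.randn(5, 8, 3)
    adj = (torch.rand(8, 8) > 0.5).float()
    emb = HybridSpatioTemporalEncoder(feat_dim=3)(x, adj)
    print(emb.shape)  # torch.Size([8, 64])
```

A policy head that maps these node embeddings to association logits and per-user power levels would sit on top of such an encoder; it is omitted from the sketch.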

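Similarly, the multi-objective reward that balances throughput maximization and power efficiency (with fairness among the reported evaluation metrics) is only named in the abstract. The minimal sketch below combines those three terms under a generic single-connectivity downlink SINR model; the system model, the weights `w_tp`, `w_pw`, `w_fair`, and the helper functions are assumptions made for illustration, not the paper's formulation.

```python
import numpy as np


def user_rates(gains, assoc, powers, noise=1e-12, bandwidth=1e6):
    """Per-user Shannon rate for a given user association and power allocation.

    gains  : (U, B) channel gain from BS b to user u
    assoc  : (U,)   index of the BS serving each user
    powers : (U,)   downlink transmit power assigned to each user, in watts
    """
    U = gains.shape[0]
    rates = np.zeros(U)
    for u in range(U):
        signal = powers[u] * gains[u, assoc[u]]
        # Interference: power radiated for every other user, seen by user u
        # through its gain to that user's serving BS.
        interference = sum(
            powers[v] * gains[u, assoc[v]] for v in range(U) if v != u)
        rates[u] = bandwidth * np.log2(1.0 + signal / (interference + noise))
    return rates


def jain_fairness(rates):
    """Jain's fairness index in [1/U, 1]; 1 means all users get equal rates."""
    return rates.sum() ** 2 / (len(rates) * np.square(rates).sum() + 1e-12)


def multi_objective_reward(rates, powers, w_tp=1.0, w_pw=0.2, w_fair=0.5):
    """Weighted mix of throughput, power cost, and fairness (illustrative weights)."""
    return (w_tp * rates.mean() / 1e6         # mean throughput in Mbit/s
            - w_pw * powers.sum()             # total transmit power penalty
            + w_fair * jain_fairness(rates))  # fairness bonus


# Toy usage: 4 users, 2 base stations.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gains = rng.uniform(1e-10, 1e-8, size=(4, 2))
    assoc = np.array([0, 0, 1, 1])
    powers = np.array([1.0, 0.5, 1.0, 0.5])
    print(multi_objective_reward(user_rates(gains, assoc, powers), powers))
```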