DRL-Based Energy-Efficient Baseband Function Deployments for Service-Oriented Open RAN

IF 5.3 · CAS Region 2 (Computer Science) · JCR Q1 (TELECOMMUNICATIONS)
Haiyuan Li;Amin Emami;Karcius Day R. Assis;Antonis Vafeas;Ruizhi Yang;Reza Nejabati;Shuangyi Yan;Dimitra Simeonidou
{"title":"基于 DRL 的高能效基带功能部署,用于面向服务的开放式 RAN","authors":"Haiyuan Li;Amin Emami;Karcius Day R. Assis;Antonis Vafeas;Ruizhi Yang;Reza Nejabati;Shuangyi Yan;Dimitra Simeonidou","doi":"10.1109/TGCN.2023.3321195","DOIUrl":null,"url":null,"abstract":"Open Radio Access Network (Open RAN) has gained tremendous attention from industry and academia with decentralized baseband functions across multiple processing units located at different places. However, the ever-expanding scope of RANs, along with fluctuations in resource utilization across different locations and timeframes, necessitates the implementation of robust function management policies to minimize network energy consumption. Most recently developed strategies neglected the activation time and the required energy for the server activation process, while this process could offset the potential energy savings gained from server hibernation. Furthermore, user plane functions, which can be deployed on edge computing servers to provide low-latency services, have not been sufficiently considered. In this paper, a multi-agent deep reinforcement learning (DRL) based function deployment algorithm, coupled with a heuristic method, has been developed to minimize energy consumption while fulfilling multiple requests and adhering to latency and resource constraints. In an 8-MEC network, the DRL-based solution approaches the performance of the benchmark while offering up to 51% energy savings compared to existing approaches. In a larger network of 14-MEC, it maintains a 38% energy-saving advantage and ensures real-time response capabilities. Furthermore, this paper prototypes an Open RAN testbed to verify the feasibility of the proposed solution.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":null,"pages":null},"PeriodicalIF":5.3000,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DRL-Based Energy-Efficient Baseband Function Deployments for Service-Oriented Open RAN\",\"authors\":\"Haiyuan Li;Amin Emami;Karcius Day R. Assis;Antonis Vafeas;Ruizhi Yang;Reza Nejabati;Shuangyi Yan;Dimitra Simeonidou\",\"doi\":\"10.1109/TGCN.2023.3321195\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Open Radio Access Network (Open RAN) has gained tremendous attention from industry and academia with decentralized baseband functions across multiple processing units located at different places. However, the ever-expanding scope of RANs, along with fluctuations in resource utilization across different locations and timeframes, necessitates the implementation of robust function management policies to minimize network energy consumption. Most recently developed strategies neglected the activation time and the required energy for the server activation process, while this process could offset the potential energy savings gained from server hibernation. Furthermore, user plane functions, which can be deployed on edge computing servers to provide low-latency services, have not been sufficiently considered. In this paper, a multi-agent deep reinforcement learning (DRL) based function deployment algorithm, coupled with a heuristic method, has been developed to minimize energy consumption while fulfilling multiple requests and adhering to latency and resource constraints. 
In an 8-MEC network, the DRL-based solution approaches the performance of the benchmark while offering up to 51% energy savings compared to existing approaches. In a larger network of 14-MEC, it maintains a 38% energy-saving advantage and ensures real-time response capabilities. Furthermore, this paper prototypes an Open RAN testbed to verify the feasibility of the proposed solution.\",\"PeriodicalId\":13052,\"journal\":{\"name\":\"IEEE Transactions on Green Communications and Networking\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2023-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Green Communications and Networking\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10268589/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Green Communications and Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10268589/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Open Radio Access Network (Open RAN) has gained tremendous attention from industry and academia with decentralized baseband functions across multiple processing units located at different places. However, the ever-expanding scope of RANs, along with fluctuations in resource utilization across different locations and timeframes, necessitates robust function management policies to minimize network energy consumption. Most recently developed strategies neglect the activation time and the energy required by the server activation process, even though this process can offset the potential energy savings gained from server hibernation. Furthermore, user plane functions, which can be deployed on edge computing servers to provide low-latency services, have not been sufficiently considered. In this paper, a multi-agent deep reinforcement learning (DRL) based function deployment algorithm, coupled with a heuristic method, is developed to minimize energy consumption while fulfilling multiple requests and adhering to latency and resource constraints. In an 8-MEC network, the DRL-based solution approaches the performance of the benchmark while offering up to 51% energy savings compared to existing approaches. In a larger 14-MEC network, it maintains a 38% energy-saving advantage and ensures real-time response capability. Furthermore, this paper prototypes an Open RAN testbed to verify the feasibility of the proposed solution.
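The abstract states the optimization target (minimize energy under latency and resource constraints, accounting for server-activation overhead) without giving the formulation. As a rough, hypothetical illustration of the trade-off it highlights (wake-up energy offsetting hibernation savings), the sketch below uses simple tabular Q-learning on a toy MEC placement problem; the paper itself uses multi-agent deep RL, and every constant here (powers, capacities, latency scores) is invented for illustration only.

```python
# Toy sketch, NOT the paper's algorithm: tabular Q-learning that places
# functions on MEC servers, penalising server-activation energy so the
# policy learns to consolidate load instead of waking extra servers.
import random
from collections import defaultdict

N_SERVERS = 4
CAPACITY = 3           # functions per server (assumed)
P_STATIC = 50.0        # W, idle power of an active server (assumed)
P_DYNAMIC = 10.0       # W, per hosted function (assumed)
E_ACTIVATE = 120.0     # J, one-off wake-up cost of a sleeping server (assumed)
LATENCY = [1, 2, 3, 4] # toy per-server latency score
LAT_LIMIT = 3          # placements on servers above this are infeasible

def step(state, action):
    """Place one function on server `action`; return (next_state, reward)."""
    loads = list(state)
    if loads[action] >= CAPACITY or LATENCY[action] > LAT_LIMIT:
        return state, -500.0                # constraint violation penalty
    reward = -P_DYNAMIC                     # incremental dynamic power
    if loads[action] == 0:                  # server was hibernating
        reward -= E_ACTIVATE + P_STATIC     # wake-up energy + static power
    loads[action] += 1
    return tuple(loads), reward

def train(episodes=5000, n_requests=6, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)                  # Q[(state, action)]
    for _ in range(episodes):
        state = (0,) * N_SERVERS
        for _ in range(n_requests):
            if random.random() < eps:
                action = random.randrange(N_SERVERS)
            else:
                action = max(range(N_SERVERS), key=lambda a: q[(state, a)])
            nxt, r = step(state, action)
            best_next = max(q[(nxt, a)] for a in range(N_SERVERS))
            q[(state, action)] += alpha * (r + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    state = (0,) * N_SERVERS
    for _ in range(6):                      # greedy rollout of learned policy
        action = max(range(N_SERVERS), key=lambda a: q[(state, a)])
        state, _ = step(state, action)
    print("final server loads:", state)     # fewer active servers, less energy
```

Because each wake-up costs E_ACTIVATE on top of static power, the learned greedy policy tends to fill two servers to capacity rather than spread six functions across three, which mirrors (at toy scale) the consolidation behaviour the abstract credits with the reported energy savings.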
Source journal: IEEE Transactions on Green Communications and Networking (Computer Science - Computer Networks and Communications)
CiteScore: 9.30
Self-citation rate: 6.20%
Articles published: 181