Junchao Yang, Hui Zhang, Wenxin Jiao, Zhiwei Guo, Fayez Alqahtani, Amr Tolba, Yu Shen
DOI: 10.1016/j.sysarc.2025.103401
Journal: Journal of Systems Architecture, Volume 162, Article 103401
Impact Factor: 3.7 (JCR Q1, Computer Science, Hardware & Architecture)
Publication date: 2025-03-17 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1383762125000736
Joint optimization of layering and power allocation for scalable VR video in 6G networks based on Deep Reinforcement Learning
With the advancement and application of virtual reality (VR) technology, there is a growing demand for network bandwidth and computational capability. To address the challenges of high bandwidth requirements, low latency demands, and intensive computational tasks in VR video transmission, this paper proposes a joint optimization method for layering and power allocation based on Deep Reinforcement Learning (DRL). The method focuses on the transmission of scalable VR video in 6G networks, using DRL to realize a cloud-edge-end collaborative transmission framework: tile-based scalable VR video is proactively cached at Mobile Edge Computing (MEC) nodes, and the Asynchronous Advantage Actor-Critic (A3C) algorithm is adopted to jointly optimize dual-connectivity link resources, edge computing resources, and user terminal computing resources. The effectiveness of the proposed algorithm was validated through simulation experiments. The results show that, compared to baseline algorithms and state-of-the-art methods, the proposed A3C algorithm effectively improves the average quality of experience (QoE) for VR users and maintains low latency under various sub-6 GHz and millimeter-wave link capacities. Furthermore, as MEC computing power and User Equipment (UE) computing capability increase, the proposed method further improves QoE and reduces latency.
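To make the joint action space concrete, the following is a minimal, hypothetical sketch of the kind of decision the abstract describes: an advantage actor-critic agent picks a (scalable-video layer, transmit power) pair to maximize a QoE-style reward. The reward model (log quality, a Shannon-style rate, a latency penalty, a small power cost), the layer/power grids, and the single-worker, single-state setup are all illustrative assumptions, not the paper's actual formulation (a real A3C deployment would run multiple asynchronous workers against shared network parameters).

```python
import numpy as np

LAYERS = [1, 2, 3]               # enhancement layers of a scalable VR tile (assumed)
POWER_LEVELS = [0.1, 0.5, 1.0]   # normalized transmit power levels (assumed)

def joint_actions():
    """Enumerate the joint (layer, power) action space."""
    return [(l, p) for l in LAYERS for p in POWER_LEVELS]

def toy_qoe_reward(layer, power, link_capacity=2.0):
    """Illustrative QoE: quality grows with layers; a latency penalty
    appears when delivered layers outpace the Shannon-style link rate;
    a small cost discourages wasted transmit power."""
    quality = np.log1p(layer)
    rate = link_capacity * np.log2(1.0 + 10.0 * power)
    latency_penalty = max(0.0, layer - rate)
    return quality - latency_penalty - 0.2 * power

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

# One-state advantage actor-critic loop (a single worker; A3C would run
# several such workers asynchronously and merge gradients).
rng = np.random.default_rng(0)
acts = joint_actions()
logits = np.zeros(len(acts))     # actor parameters (one logit per joint action)
value = 0.0                      # critic's estimate of expected reward
alpha_pi, alpha_v = 0.1, 0.1

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(len(acts), p=probs)
    r = toy_qoe_reward(*acts[a])
    advantage = r - value                  # one-step advantage A = r - V
    value += alpha_v * advantage           # critic update
    grad = -probs
    grad[a] += 1.0                         # grad of log pi(a) w.r.t. logits
    logits += alpha_pi * advantage * grad  # actor update

best = acts[int(np.argmax(softmax(logits)))]
print("learned (layer, power):", best)
```

Under these toy numbers, high layers with minimal power get penalized because the implied rate cannot carry them, so the agent is pushed toward layer/power pairs that balance quality against latency, which is the trade-off the paper's joint optimization targets.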
Journal introduction:
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures as well as additional subjects in the computer and system architecture area will fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.