{"title":"基于深度分布强化学习的自适应路由与保证延迟边界","authors":"Jianmin Liu;Dan Li;Yongjun Xu","doi":"10.1109/TNET.2024.3425652","DOIUrl":null,"url":null,"abstract":"Real-time applications that require timely data delivery over wireless multi-hop networks within specified deadlines are growing increasingly. Effective routing protocols that can guarantee real-time QoS are crucial, yet challenging, due to the unpredictable variations in end-to-end delay caused by unreliable wireless channels. In such conditions, the upper bound on the end-to-end delay, i.e., worst-case end-to-end delay, should be guaranteed within the deadline. However, existing routing protocols with guaranteed delay bounds cannot strictly guarantee real-time QoS because they assume that the worst-case end-to-end delay is known and ignore the impact of routing policies on the worst-case end-to-end delay determination. In this paper, we relax this assumption and propose DDRL-ARGB, an Adaptive Routing with Guaranteed delay Bounds using Deep Distributional Reinforcement Learning (DDRL). DDRL-ARGB adopts DDRL to jointly determine the worst-case end-to-end delay and learn routing policies. To accurately determine worst-case end-to-end delay, DDRL-ARGB employs a quantile regression deep Q-network to learn the end-to-end delay cumulative distribution. To guarantee real-time QoS, DDRL-ARGB optimizes routing decisions under the constraint of worst-case end-to-end delay within the deadline. To improve traffic congestion, DDRL-ARGB considers the network congestion status when making routing decisions. Extensive results show that DDRL-ARGB can accurately calculate worst-case end-to-end delay, and can strictly guarantee real-time QoS under a small tolerant violation probability against two state-of-the-art routing protocols.","PeriodicalId":13443,"journal":{"name":"IEEE/ACM Transactions on Networking","volume":"32 6","pages":"4692-4706"},"PeriodicalIF":3.0000,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Distributional Reinforcement Learning-Based Adaptive Routing With Guaranteed Delay Bounds\",\"authors\":\"Jianmin Liu;Dan Li;Yongjun Xu\",\"doi\":\"10.1109/TNET.2024.3425652\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Real-time applications that require timely data delivery over wireless multi-hop networks within specified deadlines are growing increasingly. Effective routing protocols that can guarantee real-time QoS are crucial, yet challenging, due to the unpredictable variations in end-to-end delay caused by unreliable wireless channels. In such conditions, the upper bound on the end-to-end delay, i.e., worst-case end-to-end delay, should be guaranteed within the deadline. However, existing routing protocols with guaranteed delay bounds cannot strictly guarantee real-time QoS because they assume that the worst-case end-to-end delay is known and ignore the impact of routing policies on the worst-case end-to-end delay determination. In this paper, we relax this assumption and propose DDRL-ARGB, an Adaptive Routing with Guaranteed delay Bounds using Deep Distributional Reinforcement Learning (DDRL). DDRL-ARGB adopts DDRL to jointly determine the worst-case end-to-end delay and learn routing policies. To accurately determine worst-case end-to-end delay, DDRL-ARGB employs a quantile regression deep Q-network to learn the end-to-end delay cumulative distribution. 
To guarantee real-time QoS, DDRL-ARGB optimizes routing decisions under the constraint of worst-case end-to-end delay within the deadline. To improve traffic congestion, DDRL-ARGB considers the network congestion status when making routing decisions. Extensive results show that DDRL-ARGB can accurately calculate worst-case end-to-end delay, and can strictly guarantee real-time QoS under a small tolerant violation probability against two state-of-the-art routing protocols.\",\"PeriodicalId\":13443,\"journal\":{\"name\":\"IEEE/ACM Transactions on Networking\",\"volume\":\"32 6\",\"pages\":\"4692-4706\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE/ACM Transactions on Networking\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10598827/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ACM Transactions on Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10598827/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Deep Distributional Reinforcement Learning-Based Adaptive Routing With Guaranteed Delay Bounds
Real-time applications that require timely data delivery over wireless multi-hop networks within specified deadlines are becoming increasingly common. Effective routing protocols that can guarantee real-time QoS are crucial, yet challenging to design, because unreliable wireless channels cause unpredictable variations in end-to-end delay. In such conditions, the upper bound on the end-to-end delay, i.e., the worst-case end-to-end delay, should be guaranteed to fall within the deadline. However, existing routing protocols with guaranteed delay bounds cannot strictly guarantee real-time QoS because they assume that the worst-case end-to-end delay is known and ignore the impact of routing policies on determining that worst case. In this paper, we relax this assumption and propose DDRL-ARGB, an Adaptive Routing scheme with Guaranteed delay Bounds using Deep Distributional Reinforcement Learning (DDRL). DDRL-ARGB adopts DDRL to jointly determine the worst-case end-to-end delay and learn routing policies. To accurately determine the worst-case end-to-end delay, DDRL-ARGB employs a quantile regression deep Q-network to learn the cumulative distribution of the end-to-end delay. To guarantee real-time QoS, DDRL-ARGB optimizes routing decisions under the constraint that the worst-case end-to-end delay remains within the deadline. To alleviate traffic congestion, DDRL-ARGB also takes the network congestion status into account when making routing decisions. Extensive results show that DDRL-ARGB accurately calculates the worst-case end-to-end delay and strictly guarantees real-time QoS under a small tolerated violation probability, compared with two state-of-the-art routing protocols.
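The abstract combines two ideas: a quantile regression deep Q-network that learns the distribution of the end-to-end delay, and a routing rule that only admits next hops whose worst-case (high-quantile) delay estimate fits the remaining deadline. The sketch below illustrates both ideas in a minimal form; it is not the authors' implementation, and the network architecture, quantile count, state features, and the select_next_hop rule are all illustrative assumptions made here.

```python
# Minimal sketch (not the paper's code) of distributional delay estimation plus
# deadline-constrained next-hop selection. All sizes and names are hypothetical.
import torch
import torch.nn as nn

N_QUANTILES = 32
# Quantile midpoints tau_i = (i + 0.5) / N, as commonly used in QR-DQN.
TAUS = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES


class QuantileDelayNet(nn.Module):
    """Maps a local network state to N delay quantiles for each candidate next hop."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions * N_QUANTILES),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Output shape: (batch, n_actions, N_QUANTILES).
        return self.net(state).view(-1, self.n_actions, N_QUANTILES)


def quantile_huber_loss(pred: torch.Tensor, target: torch.Tensor, kappa: float = 1.0) -> torch.Tensor:
    """Quantile Huber loss of QR-DQN.

    pred:   (batch, N_QUANTILES) predicted delay quantiles for the taken action.
    target: (batch, N_QUANTILES) target samples, e.g. observed one-hop delay plus
            the next node's quantile estimates (a distributional Bellman target).
    """
    # Pairwise TD errors, shape (batch, n_target, n_pred).
    td = target.unsqueeze(2) - pred.unsqueeze(1)
    huber = torch.where(td.abs() <= kappa, 0.5 * td ** 2, kappa * (td.abs() - 0.5 * kappa))
    # Asymmetric quantile weights |tau - 1{td < 0}| applied per predicted quantile.
    weight = (TAUS.view(1, 1, -1) - (td.detach() < 0).float()).abs()
    return (weight * huber / kappa).sum(dim=2).mean()


def select_next_hop(quantiles: torch.Tensor, remaining_deadline: float,
                    worst_case_index: int = N_QUANTILES - 1) -> int:
    """Pick the next hop with the smallest mean delay among neighbours whose
    high-quantile (worst-case) delay estimate still fits the remaining deadline."""
    worst_case = quantiles[:, worst_case_index]   # (n_actions,)
    mean_delay = quantiles.mean(dim=1)            # (n_actions,)
    feasible = worst_case <= remaining_deadline
    if feasible.any():
        masked = torch.where(feasible, mean_delay,
                             torch.full_like(mean_delay, float("inf")))
        return int(masked.argmin())
    # No neighbour meets the deadline: fall back to the least-bad worst case.
    return int(worst_case.argmin())


if __name__ == "__main__":
    net = QuantileDelayNet(state_dim=8, n_actions=4)   # hypothetical sizes
    state = torch.randn(1, 8)                          # hypothetical local state features
    delay_quantiles = net(state)[0]                    # (4, N_QUANTILES)
    hop = select_next_hop(delay_quantiles, remaining_deadline=50.0)
    print("chosen next hop:", hop)
```

Treating the highest learned quantile as the worst-case delay is a simplification; the paper's formulation additionally works with a small tolerated violation probability and folds congestion status into the routing decision, both of which this sketch omits.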
Journal introduction:
The IEEE/ACM Transactions on Networking’s high-level objective is to publish high-quality, original research results derived from theoretical or experimental exploration of the area of communication/computer networking, covering all sorts of information transport networks over all sorts of physical layer technologies, both wireline (all kinds of guided media: e.g., copper, optical) and wireless (e.g., radio-frequency, acoustic (e.g., underwater), infra-red), or hybrids of these. The journal welcomes applied contributions reporting on novel experiences and experiments with actual systems.