{"title":"基于正则化非策略强化学习的无线网络快速链路调度","authors":"Sagnik Bhattacharya;Ayan Banerjee;Subrahmanya Swamy Peruru;Kothapalli Venkata Srinivas","doi":"10.1109/LNET.2023.3264486","DOIUrl":null,"url":null,"abstract":"The centralized-link-scheduling problem in a wireless network graph involves solving the maximum-weighted-independent-set (MWIS) problem on the conflict graph. In this letter, we propose a novel regularized off-policy reinforcement learning-based MWIS solver and use for the scheduling problem. The proposed MWIS algorithm achieves 17% improvement over state-of-the-art heuristic solver KaMIS, 60% over greedy solver, 16% and 17% over RL-based solvers LwD and S2V-DQN, respectively. We show that our scheduler achieves stable throughput values 14% and 22% higher than LwD and a distributed greedy scheduler, respectively. We demonstrate the flexibility of our RL algorithm by modifying it to create a time-since-last-service-aware scheduler.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"5 2","pages":"86-90"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast Link Scheduling in Wireless Networks Using Regularized Off-Policy Reinforcement Learning\",\"authors\":\"Sagnik Bhattacharya;Ayan Banerjee;Subrahmanya Swamy Peruru;Kothapalli Venkata Srinivas\",\"doi\":\"10.1109/LNET.2023.3264486\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The centralized-link-scheduling problem in a wireless network graph involves solving the maximum-weighted-independent-set (MWIS) problem on the conflict graph. In this letter, we propose a novel regularized off-policy reinforcement learning-based MWIS solver and use for the scheduling problem. The proposed MWIS algorithm achieves 17% improvement over state-of-the-art heuristic solver KaMIS, 60% over greedy solver, 16% and 17% over RL-based solvers LwD and S2V-DQN, respectively. We show that our scheduler achieves stable throughput values 14% and 22% higher than LwD and a distributed greedy scheduler, respectively. We demonstrate the flexibility of our RL algorithm by modifying it to create a time-since-last-service-aware scheduler.\",\"PeriodicalId\":100628,\"journal\":{\"name\":\"IEEE Networking Letters\",\"volume\":\"5 2\",\"pages\":\"86-90\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Networking Letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10092874/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Networking Letters","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10092874/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fast Link Scheduling in Wireless Networks Using Regularized Off-Policy Reinforcement Learning
The centralized link-scheduling problem in a wireless network graph involves solving the maximum-weighted-independent-set (MWIS) problem on the conflict graph. In this letter, we propose a novel regularized off-policy reinforcement learning-based MWIS solver and use it for the scheduling problem. The proposed MWIS algorithm achieves a 17% improvement over the state-of-the-art heuristic solver KaMIS, 60% over a greedy solver, and 16% and 17% over the RL-based solvers LwD and S2V-DQN, respectively. We show that our scheduler achieves stable throughput values 14% and 22% higher than LwD and a distributed greedy scheduler, respectively. We demonstrate the flexibility of our RL algorithm by modifying it to create a time-since-last-service-aware scheduler.
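To make the MWIS formulation of link scheduling concrete, the sketch below implements the kind of greedy baseline the abstract compares against: links are nodes of a conflict graph, an edge marks two links that interfere, and the scheduler repeatedly picks the heaviest remaining link that does not conflict with anything already scheduled. This is only an illustrative sketch of a generic greedy heuristic, not the authors' regularized off-policy RL solver; the use of networkx, the `weight` attribute, and the toy graph are assumptions made for the example.

```python
import networkx as nx


def greedy_mwis(conflict_graph: nx.Graph) -> set:
    """Greedy maximum-weighted-independent-set heuristic on a conflict graph.

    Nodes are links; an edge means the two links interfere and cannot be
    scheduled in the same slot. Each node carries a 'weight' attribute
    (e.g., queue length or link utility). Links are considered in
    descending order of weight, skipping any link that conflicts with an
    already-scheduled one.
    """
    schedule = set()
    blocked = set()
    for node in sorted(conflict_graph.nodes,
                       key=lambda n: conflict_graph.nodes[n]["weight"],
                       reverse=True):
        if node in blocked:
            continue
        schedule.add(node)
        # All neighbors of a scheduled link are excluded from this slot.
        blocked.update(conflict_graph.neighbors(node))
    return schedule


# Toy conflict graph: 4 links in a path, weights stand in for queue lengths.
G = nx.Graph()
G.add_nodes_from([(0, {"weight": 5}), (1, {"weight": 3}),
                  (2, {"weight": 4}), (3, {"weight": 2})])
G.add_edges_from([(0, 1), (1, 2), (2, 3)])
print(greedy_mwis(G))  # {0, 2}: total scheduled weight 9
```

The RL-based solvers discussed in the letter replace this fixed greedy selection rule with a learned policy over the same conflict-graph state, which is where the reported throughput gains over the greedy scheduler come from.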