Reinforcement Learning assisted Routing for Time Sensitive Networks

Nurefsan Sertbas Bülbül, Mathias Fischer
{"title":"时间敏感网络的强化学习辅助路由","authors":"Nurefsan Sertbas Bülbül, Mathias Fischer","doi":"10.1109/GLOBECOM48099.2022.10001630","DOIUrl":null,"url":null,"abstract":"Recent developments in real-time critical systems pave the way for different application scenarios such as Industrial IoT with various quality-of-service (QoS) requirements. The most critical common feature of such applications is that they are sensitive to latency and jitter. Thus, it is desired to perform flow placements strategically considering application requirements due to limited resource availability. In this paper, path computation for time-sensitive networks is investigated while satisfying individual end-to-end delay requirements of critical traffic. The problem is formulated as a mixed-integer linear program (MILP) which is NP-hard with exponentially increasing computational complexity as the network size expands. To solve the MILP with high efficiency, we propose a reinforcement learning (RL) algorithm that learns the best routing policy by continuously interacting with the network environment. The proposed learning algorithm determines the variable action set at each decision-making state and captures different execution times of the actions. The reward function in the proposed algorithm is carefully designed for meeting individual flow deadlines. Simulation results indicate that the proposed reinforcement learning algorithm can produce near-optimal flow allocations (close by ~1.5 %) and scales well even with large topology sizes.","PeriodicalId":313199,"journal":{"name":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","volume":"160 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Reinforcement Learning assisted Routing for Time Sensitive Networks\",\"authors\":\"Nurefsan Sertbas Bülbül, Mathias Fischer\",\"doi\":\"10.1109/GLOBECOM48099.2022.10001630\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent developments in real-time critical systems pave the way for different application scenarios such as Industrial IoT with various quality-of-service (QoS) requirements. The most critical common feature of such applications is that they are sensitive to latency and jitter. Thus, it is desired to perform flow placements strategically considering application requirements due to limited resource availability. In this paper, path computation for time-sensitive networks is investigated while satisfying individual end-to-end delay requirements of critical traffic. The problem is formulated as a mixed-integer linear program (MILP) which is NP-hard with exponentially increasing computational complexity as the network size expands. To solve the MILP with high efficiency, we propose a reinforcement learning (RL) algorithm that learns the best routing policy by continuously interacting with the network environment. The proposed learning algorithm determines the variable action set at each decision-making state and captures different execution times of the actions. The reward function in the proposed algorithm is carefully designed for meeting individual flow deadlines. 
Simulation results indicate that the proposed reinforcement learning algorithm can produce near-optimal flow allocations (close by ~1.5 %) and scales well even with large topology sizes.\",\"PeriodicalId\":313199,\"journal\":{\"name\":\"GLOBECOM 2022 - 2022 IEEE Global Communications Conference\",\"volume\":\"160 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"GLOBECOM 2022 - 2022 IEEE Global Communications Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBECOM48099.2022.10001630\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"GLOBECOM 2022 - 2022 IEEE Global Communications Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM48099.2022.10001630","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Recent developments in real-time critical systems pave the way for application scenarios such as the Industrial IoT, with diverse quality-of-service (QoS) requirements. The most critical common feature of such applications is their sensitivity to latency and jitter. Because resource availability is limited, flows should therefore be placed strategically according to application requirements. In this paper, path computation for time-sensitive networks is investigated while satisfying the individual end-to-end delay requirements of critical traffic. The problem is formulated as a mixed-integer linear program (MILP), which is NP-hard and whose computational complexity grows exponentially with network size. To solve the MILP efficiently, we propose a reinforcement learning (RL) algorithm that learns the best routing policy by continuously interacting with the network environment. The proposed learning algorithm determines the variable action set at each decision-making state and captures the different execution times of the actions. The reward function is carefully designed to meet individual flow deadlines. Simulation results indicate that the proposed algorithm produces near-optimal flow allocations (within ~1.5% of the optimum) and scales well even for large topologies.
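The paper itself is not reproduced here, and the abstract does not spell out the exact MILP formulation or the RL state/action/reward design, so the two sketches below are illustrative reconstructions of the general techniques it names, not the authors' implementation. Both assume a hypothetical four-node topology with per-link delays and a single flow from A to D with a 5 ms end-to-end deadline; all names, delays, and reward constants are made up for illustration.

First, a minimal delay-constrained routing MILP in the spirit of the one the abstract describes, using the PuLP solver: binary edge-selection variables, flow conservation, and an end-to-end deadline constraint.

```python
import pulp

# Hypothetical topology: directed adjacency with per-link delays (ms).
LINKS = {
    "A": {"B": 2.0, "C": 3.0},
    "B": {"A": 2.0, "D": 4.0},
    "C": {"A": 3.0, "D": 1.0},
    "D": {"B": 4.0, "C": 1.0},
}
EDGES = [(u, v) for u in LINKS for v in LINKS[u]]
SRC, DST, DEADLINE = "A", "D", 5.0

prob = pulp.LpProblem("delay_constrained_routing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", EDGES, cat="Binary")   # is edge (u,v) on the path?

# Objective: minimize total path delay.
prob += pulp.lpSum(LINKS[u][v] * x[(u, v)] for u, v in EDGES)

# Flow conservation: one unit leaves SRC, one unit enters DST.
for n in LINKS:
    out_flow = pulp.lpSum(x[(n, v)] for v in LINKS[n])
    in_flow = pulp.lpSum(x[(u, n)] for u in LINKS if n in LINKS[u])
    prob += out_flow - in_flow == (1 if n == SRC else -1 if n == DST else 0)

# End-to-end deadline constraint for this flow.
prob += pulp.lpSum(LINKS[u][v] * x[(u, v)] for u, v in EDGES) <= DEADLINE

prob.solve(pulp.PULP_CBC_CMD(msg=False))
path_edges = [(u, v) for u, v in EDGES if x[(u, v)].value() == 1]
print(path_edges)   # expected under these assumptions: [('A', 'C'), ('C', 'D')]
```

Second, a tabular Q-learning sketch of the RL alternative, reusing the toy topology and flow from the MILP sketch above. The state is the current node, the action set varies per state (only unvisited neighbors, echoing the paper's variable action set), and the reward charges per-hop delay plus a terminal bonus or penalty depending on whether the flow's deadline was met.

```python
import random
from collections import defaultdict

# LINKS, SRC, DST, DEADLINE as defined in the MILP sketch above.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
Q = defaultdict(float)                   # Q[(node, next_hop)] -> value

def episode(src, dst, deadline_ms):
    """Route one flow with epsilon-greedy Q-learning; returns path delay."""
    node, visited, delay = src, {src}, 0.0
    while node != dst:
        actions = [n for n in LINKS[node] if n not in visited]  # variable action set
        if not actions:
            return None                  # dead end, abandon this trajectory
        nxt = (random.choice(actions) if random.random() < EPSILON
               else max(actions, key=lambda a: Q[(node, a)]))
        delay += LINKS[node][nxt]
        reward = -LINKS[node][nxt]       # per-hop delay cost
        if nxt == dst:
            reward += 10.0 if delay <= deadline_ms else -10.0
            future = 0.0                 # terminal state, no bootstrap
        else:
            future = max((Q[(nxt, a)] for a in LINKS[nxt] if a not in visited),
                         default=0.0)
        Q[(node, nxt)] += ALPHA * (reward + GAMMA * future - Q[(node, nxt)])
        visited.add(nxt)
        node = nxt
    return delay

random.seed(0)
for _ in range(2000):                    # train on repeated requests of one flow
    episode(SRC, DST, DEADLINE)

node, path = SRC, [SRC]                  # extract the greedy path after training
while node != DST and len(path) < len(LINKS):
    node = max(LINKS[node], key=lambda a: Q[(node, a)])
    path.append(node)
print(" -> ".join(path))                 # expected: A -> C -> D (4 ms <= 5 ms)
```

The two sketches make the trade-off in the abstract concrete: the MILP solves each placement exactly but its solve time grows quickly with topology size, while the trained Q-table is cheap to query per flow request and only approximates the optimum.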