{"title":"利用强化学习自主管理延迟容忍网络节点中的缓冲区","authors":"Elizabeth Harkavy, M. S. Net","doi":"10.1109/AERO47225.2020.9172453","DOIUrl":null,"url":null,"abstract":"In order to effectively communicate with Earth from deep space there is a need for network automation similar to that of the Internet. The existing automated network protocols, such as TCP and IP, cannot work in deep space due to the assumptions under which they were designed. Specifically, protocols assume the existence of an end-to-end path between the source and destination for the entirety of a communication session and the path being traversable in a negligible amount of time. In contrast, a Delay Tolerant Network is a set of protocols that allows networking in environments where links suffer from high-delay or disruptions (e.g. Deep Space). These protocols rely on different assumptions such as time synchronization and suitable memory allocation. In this paper, we consider the problem of autonomously avoiding memory overflows in a Delay Tolerant Node. To that end, we propose using Reinforcement Learning to automate buffer management given that we can easily measure the relative rates of data coming in and out of the DTN node. In the case of detecting overflow, we let the autonomous agent choose between three actions: slowing down the client, requesting more resources from the Deep Space Network, or selectively dropping packets once the buffer nears capacity. Furthermore, we show that all of these actions can be realistically implemented in real-life operations given current and planned capabilities of Delay Tolerant Networking and the Deep Space Network. Similarly, we also show that using Reinforcement Learning for this problem is well suited to this application due to the number of possible states and variables, as well as the fact that large distances between deep space spacecraft and Earth prevent human-in-the-loop intervention.","PeriodicalId":114560,"journal":{"name":"2020 IEEE Aerospace Conference","volume":"129 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Utilizing Reinforcement Learning to Autonomously Mange Buffers in a Delay Tolerant Network Node\",\"authors\":\"Elizabeth Harkavy, M. S. Net\",\"doi\":\"10.1109/AERO47225.2020.9172453\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In order to effectively communicate with Earth from deep space there is a need for network automation similar to that of the Internet. The existing automated network protocols, such as TCP and IP, cannot work in deep space due to the assumptions under which they were designed. Specifically, protocols assume the existence of an end-to-end path between the source and destination for the entirety of a communication session and the path being traversable in a negligible amount of time. In contrast, a Delay Tolerant Network is a set of protocols that allows networking in environments where links suffer from high-delay or disruptions (e.g. Deep Space). These protocols rely on different assumptions such as time synchronization and suitable memory allocation. In this paper, we consider the problem of autonomously avoiding memory overflows in a Delay Tolerant Node. To that end, we propose using Reinforcement Learning to automate buffer management given that we can easily measure the relative rates of data coming in and out of the DTN node. 
In the case of detecting overflow, we let the autonomous agent choose between three actions: slowing down the client, requesting more resources from the Deep Space Network, or selectively dropping packets once the buffer nears capacity. Furthermore, we show that all of these actions can be realistically implemented in real-life operations given current and planned capabilities of Delay Tolerant Networking and the Deep Space Network. Similarly, we also show that using Reinforcement Learning for this problem is well suited to this application due to the number of possible states and variables, as well as the fact that large distances between deep space spacecraft and Earth prevent human-in-the-loop intervention.\",\"PeriodicalId\":114560,\"journal\":{\"name\":\"2020 IEEE Aerospace Conference\",\"volume\":\"129 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Aerospace Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AERO47225.2020.9172453\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Aerospace Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AERO47225.2020.9172453","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Utilizing Reinforcement Learning to Autonomously Mange Buffers in a Delay Tolerant Network Node
In order to communicate effectively with Earth from deep space, there is a need for network automation similar to that of the terrestrial Internet. Existing automated network protocols, such as TCP and IP, cannot work in deep space because of the assumptions under which they were designed. Specifically, these protocols assume that an end-to-end path exists between the source and the destination for the entirety of a communication session, and that this path can be traversed in a negligible amount of time. In contrast, Delay Tolerant Networking (DTN) is a set of protocols that enables networking in environments where links suffer from high delay or frequent disruptions (e.g., deep space). These protocols rely on different assumptions, such as time synchronization and suitable memory allocation. In this paper, we consider the problem of autonomously avoiding memory overflows in a Delay Tolerant Network node. To that end, we propose using Reinforcement Learning to automate buffer management, given that the relative rates of data flowing into and out of the DTN node can be easily measured. When an impending overflow is detected, we let the autonomous agent choose among three actions: slowing down the client, requesting more resources from the Deep Space Network, or selectively dropping packets once the buffer nears capacity. Furthermore, we show that all of these actions can be realistically implemented in real-life operations given current and planned capabilities of Delay Tolerant Networking and the Deep Space Network. We also show that Reinforcement Learning is well suited to this problem because of the large number of possible states and variables, and because the large distances between deep-space spacecraft and Earth preclude human-in-the-loop intervention.
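To make the described setup concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of a tabular Q-learning agent for this buffer-management problem: the state is a coarse discretization of buffer occupancy and net inflow rate, and the action set is the three options named in the abstract (throttle the client, request more Deep Space Network resources, or selectively drop packets). All variable names, the reward shaping, and the toy buffer dynamics are illustrative assumptions.

```python
# Hypothetical sketch: tabular Q-learning for DTN buffer management.
# States, actions, rewards, and dynamics are illustrative assumptions only.
import random
from collections import defaultdict

ACTIONS = ["throttle_client", "request_dsn_resources", "drop_packets"]

def discretize(buffer_fill, net_inflow):
    """Map continuous observations to a small discrete state."""
    fill_bin = min(int(buffer_fill * 10), 9)            # buffer occupancy, 10 bins
    flow_bin = 0 if net_inflow < 0 else (1 if net_inflow < 0.05 else 2)
    return (fill_bin, flow_bin)

class QLearningAgent:
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)                      # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:               # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

def reward_fn(buffer_fill, action):
    """Illustrative reward: avoid overflow, but penalize costly actions."""
    r = -10.0 if buffer_fill > 0.95 else 0.0             # near-overflow is the worst case
    if action == "drop_packets":
        r -= 1.0                                         # losing data is undesirable
    elif action == "request_dsn_resources":
        r -= 0.5                                         # extra DSN passes have a cost
    return r

# Toy training loop with purely illustrative buffer dynamics.
agent = QLearningAgent()
fill, inflow = 0.5, 0.02
for _ in range(10000):
    state = discretize(fill, inflow)
    action = agent.act(state)
    if action == "throttle_client":
        inflow = max(inflow - 0.01, -0.02)               # client slows its data rate
    elif action == "request_dsn_resources":
        inflow -= 0.02                                   # more downlink drains the buffer
    else:                                                # drop_packets
        fill = max(fill - 0.05, 0.0)
    fill = min(max(fill + inflow, 0.0), 1.0)
    inflow += random.uniform(-0.005, 0.005)              # client demand drifts over time
    agent.update(state, action, reward_fn(fill, action), discretize(fill, inflow))
```

In practice, such an agent would be trained against a higher-fidelity model of DTN bundle flows and Deep Space Network pass scheduling; the sketch only illustrates the shape of the state, action, and reward interfaces implied by the abstract.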