Deep Q-Learning for Decentralized Multi-Agent Inspection of a Tumbling Target

Joshua Aurand, Steven C. Cutlip, Henry Lei, Kendra A. Lang, Sean Phillips

Journal of Spacecraft and Rockets, published 2024-02-03, DOI: 10.2514/1.a35749
As the number of on-orbit satellites increases, the ability to repair or de-orbit them is becoming increasingly important. The implicitly required task of on-orbit inspection is challenging due to coordination of multiple observer satellites, a highly nonlinear environment, a potentially unknown or unpredictable target, and time delays associated with ground-based control. There is a critical need for autonomous, robust, decentralized solutions. To achieve this, we consider a hierarchical, learned approach for the decentralized planning of multi-agent inspection of a tumbling target. Our solution consists of two components: a viewpoint or high-level planner trained using deep reinforcement learning, and a low-level planner that will handle the point-to-point maneuvering of the spacecraft. Operating under limited information, our trained multi-agent high-level policies successfully contextualize information within the global hierarchical environment and are correspondingly able to inspect over 90% of nonconvex tumbling targets, even in the absence of additional agent attitude control.
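The abstract describes a hierarchical scheme in which a deep-Q-learned high-level policy selects the next viewpoint to inspect while a separate low-level planner handles the maneuver. The sketch below is a deliberately simplified illustration of that idea, not the paper's implementation: the environment, reward, state encoding (a coverage bitmask over discretized viewpoints plus a bias feature), network size, and 90% coverage threshold are all toy assumptions chosen for clarity.

```python
# Illustrative toy sketch of a deep Q-learning viewpoint planner.
# NOT the paper's method: dynamics, reward, and architecture are invented.
import numpy as np

rng = np.random.default_rng(0)

N_VIEWPOINTS = 8               # discretized viewpoints around the target
STATE_DIM = N_VIEWPOINTS + 1   # coverage bitmask + constant bias feature
HIDDEN = 32

# Tiny two-layer Q-network: state -> one Q-value per candidate viewpoint.
W1 = rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_VIEWPOINTS))

def q_values(state):
    h = np.maximum(0.0, state @ W1)  # ReLU hidden layer
    return h @ W2

def select_action(state, epsilon):
    # Epsilon-greedy choice over candidate viewpoints.
    if rng.random() < epsilon:
        return int(rng.integers(N_VIEWPOINTS))
    return int(np.argmax(q_values(state)))

def step(state, action):
    # Toy dynamics: inspecting a new viewpoint yields +1, revisits yield 0.
    reward = 0.0 if state[action] else 1.0
    next_state = state.copy()
    next_state[action] = 1.0
    # Episode ends once 90% of the viewpoints have been inspected.
    done = next_state[:N_VIEWPOINTS].sum() >= 0.9 * N_VIEWPOINTS
    return next_state, reward, bool(done)

def train(episodes=200, gamma=0.9, lr=0.05, epsilon=0.2, max_steps=200):
    """One-step Q-learning with a manual gradient on the chosen action."""
    global W1, W2
    for _ in range(episodes):
        state = np.zeros(STATE_DIM)
        state[-1] = 1.0  # bias feature always on
        for _ in range(max_steps):
            a = select_action(state, epsilon)
            next_state, r, done = step(state, a)
            target = r + (0.0 if done else gamma * q_values(next_state).max())
            # Gradients of 0.5 * (Q(s,a) - target)^2 w.r.t. W1, W2.
            h = np.maximum(0.0, state @ W1)
            td_err = q_values(state)[a] - target
            grad_h = td_err * W2[:, a] * (h > 0)
            W2[:, a] -= lr * td_err * h
            W1 -= lr * np.outer(state, grad_h)
            state = next_state
            if done:
                break
```

After training, a greedy rollout (`select_action(state, 0.0)`) plays the role of the decentralized high-level policy for one agent; the paper's low-level planner, which actually flies the spacecraft between viewpoints, is abstracted away entirely here.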