{"title":"基于深度强化学习控制的空间机械臂避碰","authors":"James Blaise, Michael C. F. Bazzocchi","doi":"10.3390/aerospace10090778","DOIUrl":null,"url":null,"abstract":"Recent efforts in on-orbit servicing, manufacturing, and debris removal have accentuated some of the challenges related to close-proximity space manipulation. Orbital debris threatens future space endeavors driving active removal missions. Additionally, refueling missions have become increasingly viable to prolong satellite life and mitigate future debris generation. The ability to capture cooperative and non-cooperative spacecraft is an essential step for refueling or removal missions. In close-proximity capture, collision avoidance remains a challenge during trajectory planning for space manipulators. In this research, a deep reinforcement learning control approach is applied to a three-degrees-of-freedom manipulator to capture space objects and avoid collisions. This approach is investigated in both free-flying and free-floating scenarios, where the target object is either cooperative or non-cooperative. A deep reinforcement learning controller is trained for each scenario to effectively reach a target capture location on a simulated spacecraft model while avoiding collisions. Collisions between the base spacecraft and the target spacecraft are avoided in the planned manipulator trajectories. The trained model is tested for each scenario and the results for the manipulator and base motion are detailed and discussed.","PeriodicalId":50845,"journal":{"name":"Aerospace America","volume":"9 6 1","pages":""},"PeriodicalIF":0.1000,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control\",\"authors\":\"James Blaise, Michael C. F. Bazzocchi\",\"doi\":\"10.3390/aerospace10090778\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent efforts in on-orbit servicing, manufacturing, and debris removal have accentuated some of the challenges related to close-proximity space manipulation. Orbital debris threatens future space endeavors driving active removal missions. Additionally, refueling missions have become increasingly viable to prolong satellite life and mitigate future debris generation. The ability to capture cooperative and non-cooperative spacecraft is an essential step for refueling or removal missions. In close-proximity capture, collision avoidance remains a challenge during trajectory planning for space manipulators. In this research, a deep reinforcement learning control approach is applied to a three-degrees-of-freedom manipulator to capture space objects and avoid collisions. This approach is investigated in both free-flying and free-floating scenarios, where the target object is either cooperative or non-cooperative. A deep reinforcement learning controller is trained for each scenario to effectively reach a target capture location on a simulated spacecraft model while avoiding collisions. Collisions between the base spacecraft and the target spacecraft are avoided in the planned manipulator trajectories. 
The trained model is tested for each scenario and the results for the manipulator and base motion are detailed and discussed.\",\"PeriodicalId\":50845,\"journal\":{\"name\":\"Aerospace America\",\"volume\":\"9 6 1\",\"pages\":\"\"},\"PeriodicalIF\":0.1000,\"publicationDate\":\"2023-08-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Aerospace America\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.3390/aerospace10090778\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, AEROSPACE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Aerospace America","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/aerospace10090778","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, AEROSPACE","Score":null,"Total":0}
Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control
Recent efforts in on-orbit servicing, manufacturing, and debris removal have accentuated some of the challenges related to close-proximity space manipulation. Orbital debris threatens future space endeavors, driving active removal missions. Additionally, refueling missions have become increasingly viable as a means to prolong satellite life and mitigate future debris generation. The ability to capture cooperative and non-cooperative spacecraft is an essential step for refueling or removal missions. In close-proximity capture, collision avoidance remains a challenge during trajectory planning for space manipulators. In this research, a deep reinforcement learning control approach is applied to a three-degree-of-freedom manipulator to capture space objects and avoid collisions. This approach is investigated in both free-flying and free-floating scenarios, where the target object is either cooperative or non-cooperative. A deep reinforcement learning controller is trained for each scenario to effectively reach a target capture location on a simulated spacecraft model while avoiding collisions. Collisions between the base spacecraft and the target spacecraft are avoided in the planned manipulator trajectories. The trained model is tested for each scenario, and the results for the manipulator and base motion are detailed and discussed.
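
As a rough illustration of the kind of reward shaping such a controller might rely on, the sketch below combines an end-effector distance term toward the capture point with a penalty for entering keep-out zones around the base and target spacecraft. The abstract does not specify the reward design, state representation, or manipulator geometry, so the planar three-degree-of-freedom kinematics, link lengths, obstacle radii, and penalty weights here are illustrative assumptions only, not values from the paper.

# Hedged sketch of a reward function for a DRL capture controller.
# Geometry and gains are assumed for illustration; the paper's actual
# reward, state, and kinematics are not given in the abstract.
import numpy as np

LINK_LENGTHS = np.array([1.0, 0.8, 0.6])  # assumed planar link lengths [m]

def link_end_positions(joint_angles: np.ndarray) -> np.ndarray:
    """Positions of each link's distal end for a planar 3-DOF arm (base at origin)."""
    cumulative = np.cumsum(joint_angles)
    offsets = np.stack([LINK_LENGTHS * np.cos(cumulative),
                        LINK_LENGTHS * np.sin(cumulative)], axis=1)
    return np.cumsum(offsets, axis=0)  # shape (3, 2); last row is the end effector

def reward(joint_angles: np.ndarray,
           capture_point: np.ndarray,
           obstacle_centers: np.ndarray,
           obstacle_radius: float = 0.3,
           collision_penalty: float = 10.0) -> float:
    """Negative distance to the capture point, minus a penalty whenever any
    link endpoint enters an obstacle keep-out circle (coarse collision check)."""
    positions = link_end_positions(joint_angles)
    dist_to_target = np.linalg.norm(positions[-1] - capture_point)
    r = -dist_to_target
    for center in obstacle_centers:
        if np.min(np.linalg.norm(positions - center, axis=1)) < obstacle_radius:
            r -= collision_penalty
    return r

if __name__ == "__main__":
    q = np.array([0.4, -0.2, 0.1])          # example joint configuration [rad]
    target = np.array([1.8, 0.5])           # assumed capture point [m]
    obstacles = np.array([[1.0, -0.5]])     # assumed keep-out zone center [m]
    print(reward(q, target, obstacles))

In a full training setup, a function of this form would typically be evaluated at each simulation step inside an environment loop, with separate training runs for the free-flying and free-floating cases as the abstract describes.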