{"title":"地球轨道航天器的自主机载规划","authors":"Adam Herrmann, H. Schaub","doi":"10.1109/AERO53065.2022.9843331","DOIUrl":null,"url":null,"abstract":"This work explores on-board planning and scheduling for the multi-target, single spacecraft Earth-observing satellite (EOS) scheduling problem. The problem is formulated as a Markov decision process (MDP) where the number of targets included in the state and action space is an adjustable parameter that may account for clusters of targets with varying priorities. As targets are passed or imaged, they are replaced in the state and action space with the next set of upcoming targets. Unlike prior EOS problem formulations, this work explores how the size of the state and action space can be reduced to produce optimal, generalized policies that may be executed on board the spacecraft in a fraction of a second. Performance of the agents is shown to increase with the number of targets in the state and action space. The number of imaged and downlinked targets stays relatively constant, but the reward increases significantly, demonstrating that the agents are prioritizing high priority targets over low priority targets.","PeriodicalId":219988,"journal":{"name":"2022 IEEE Aerospace Conference (AERO)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Autonomous On-board Planning for Earth-Orbiting Spacecraft\",\"authors\":\"Adam Herrmann, H. Schaub\",\"doi\":\"10.1109/AERO53065.2022.9843331\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work explores on-board planning and scheduling for the multi-target, single spacecraft Earth-observing satellite (EOS) scheduling problem. The problem is formulated as a Markov decision process (MDP) where the number of targets included in the state and action space is an adjustable parameter that may account for clusters of targets with varying priorities. As targets are passed or imaged, they are replaced in the state and action space with the next set of upcoming targets. Unlike prior EOS problem formulations, this work explores how the size of the state and action space can be reduced to produce optimal, generalized policies that may be executed on board the spacecraft in a fraction of a second. Performance of the agents is shown to increase with the number of targets in the state and action space. 
The number of imaged and downlinked targets stays relatively constant, but the reward increases significantly, demonstrating that the agents are prioritizing high priority targets over low priority targets.\",\"PeriodicalId\":219988,\"journal\":{\"name\":\"2022 IEEE Aerospace Conference (AERO)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Aerospace Conference (AERO)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AERO53065.2022.9843331\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Aerospace Conference (AERO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AERO53065.2022.9843331","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Autonomous On-board Planning for Earth-Orbiting Spacecraft
This work explores on-board planning and scheduling for the multi-target, single-spacecraft Earth-observing satellite (EOS) scheduling problem. The problem is formulated as a Markov decision process (MDP) in which the number of targets included in the state and action space is an adjustable parameter that can account for clusters of targets with varying priorities. As targets are passed or imaged, they are replaced in the state and action space by the next set of upcoming targets. Unlike prior EOS problem formulations, this work explores how the size of the state and action space can be reduced to produce optimal, generalized policies that can be executed on board the spacecraft in a fraction of a second. The performance of the agents is shown to increase with the number of targets in the state and action space. The number of imaged and downlinked targets stays relatively constant, but the reward increases significantly, demonstrating that the agents prioritize high-priority targets over low-priority targets.
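The sliding-window treatment of targets described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration rather than the authors' implementation: the class names (Target, SlidingWindowEOSState), the priority-proportional reward, and the 60-second decision interval are all assumptions made for the example. It shows only the core idea: the agent always sees a fixed number of upcoming targets, and targets that are imaged or whose access window has passed are replaced by the next targets in the sequence.

# Hypothetical sketch of the sliding-window target formulation: the state and
# action space hold a fixed number of upcoming targets (an adjustable
# parameter), and imaged or passed targets are replaced from the target deck.
# Names, reward values, and time steps are illustrative assumptions.
import random
from dataclasses import dataclass, field


@dataclass
class Target:
    target_id: int
    priority: int          # higher value = higher priority
    access_end: float      # simulation time after which the target is "passed"


@dataclass
class SlidingWindowEOSState:
    """State exposes only the next `window_size` targets plus simple resource counters."""
    window_size: int
    upcoming: list = field(default_factory=list)   # ordered future targets (the full deck)
    window: list = field(default_factory=list)     # targets currently in the state/action space
    time: float = 0.0
    stored_images: int = 0

    def refill(self):
        # Drop targets whose access window has closed, then top the window back
        # up to `window_size` from the queue of upcoming targets.
        self.window = [t for t in self.window if t.access_end > self.time]
        while len(self.window) < self.window_size and self.upcoming:
            self.window.append(self.upcoming.pop(0))

    def step(self, action: int) -> float:
        """Action i images the i-th target in the window; returns a priority-weighted reward."""
        reward = 0.0
        if 0 <= action < len(self.window):
            imaged = self.window.pop(action)
            self.stored_images += 1
            reward = float(imaged.priority)        # reward proportional to priority (assumption)
        self.time += 60.0                          # advance one decision interval (assumption)
        self.refill()
        return reward


# Toy usage: a 3-target window over a 20-target deck, with a greedy-by-priority agent.
deck = [Target(i, random.randint(1, 5), access_end=120.0 + 90.0 * i) for i in range(20)]
state = SlidingWindowEOSState(window_size=3, upcoming=deck)
state.refill()
total = 0.0
while state.window:
    best = max(range(len(state.window)), key=lambda i: state.window[i].priority)
    total += state.step(best)
print(f"imaged {state.stored_images} targets, priority-weighted reward {total:.1f}")

Because window_size fixes the dimension of the state and action space regardless of how many targets are in the full deck, a single small policy can generalize across target decks, which is what makes sub-second on-board policy evaluation plausible in the formulation described above.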