Ming Chen, Luona Wei, Jie Chun, Lei He, Shang Xiang, Lining Xing, Yingwu Chen
Title: A neural priority model for agile earth observation satellite scheduling using deep reinforcement learning
DOI: 10.1016/j.asoc.2025.112984
Journal: Applied Soft Computing, Volume 174, Article 112984 (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 7.2)
Publication date: 2025-03-10
URL: https://www.sciencedirect.com/science/article/pii/S1568494625002959
Citations: 0
Abstract
The agile earth observation satellite scheduling problem (AEOSSP) is a time-dependent and complex combinatorial optimization challenge that has spurred extensive research for decades. Traditional methods have primarily relied on iterative searching processes to approximate near-optimal solutions, but their efficiency remains limited. To address this issue, we propose a Priority Construction Model (PCM) based on deep reinforcement learning (DRL), forming a learning-based, two-stage construction heuristic. The PCM integrates a Priority Construction Neural Network (PCNN) alongside a Backward-Slacken and Top-Insert (BS-TI) scheduling algorithm. In PCM, the PCNN sequences observation requests, while the BS-TI schedules each sequenced request in accordance with specific constraints, thus freeing the neural policy from the burden of complex constraint checking. Experimental results indicate that following a policy-gradient-based DRL training process, PCM outperforms the state-of-the-art AEOSSP iterative algorithm, achieving better average profits within an exceptionally short construction time in most scenarios. The model study further reveals that PCNN outperforms other DRL policies in terms of priority policy representation, while the PCM exhibits superior generalization capabilities across varying scales and distributions. Therefore, our proposed model presents a valuable reference solution that not only meets the large-scale and rapid response requirements of the AEOSSP but also holds potential for application in upcoming large constellations and emerging management paradigms. More importantly, we introduce a novel framework that separates the DRL optimization process from constraint management, lowering the entry barrier for applying DRL to complex problems. This makes the model adaptable to various optimization challenges in engineering and operations research, thus extending its applicability beyond the AEOSSP domain.
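The two-stage decomposition described in the abstract can be illustrated with a toy sketch: a priority policy orders the observation requests, and a separate scheduler performs all feasibility checking when inserting them into the timeline. The function names, the profit-per-duration scoring rule (standing in for the neural policy), and the single-window constraint model below are illustrative assumptions, not the paper's actual PCNN or BS-TI implementations.

```python
def priority_policy(requests):
    """Stage 1: order requests by a score. A simple profit-per-duration
    heuristic stands in here for the learned neural policy."""
    return sorted(requests, key=lambda r: r["profit"] / r["duration"],
                  reverse=True)


def earliest_feasible_start(timeline, r, horizon):
    """Scan the gaps in the current (sorted) timeline for the first slot
    that fits the request inside its visibility window."""
    t = r["window_start"]
    for busy_start, busy_end, _ in timeline:
        if t + r["duration"] <= busy_start:
            break  # the request fits in the gap before this busy interval
        t = max(t, busy_end)
    if t + r["duration"] <= min(r["window_end"], horizon):
        return t
    return None


def schedule(requests, horizon):
    """Stage 2: insert each request in priority order only if a feasible
    slot exists, so the priority policy itself never has to reason about
    scheduling constraints."""
    timeline = []  # list of (start, end, request) tuples, kept sorted
    for r in priority_policy(requests):
        start = earliest_feasible_start(timeline, r, horizon)
        if start is not None:
            timeline.append((start, start + r["duration"], r))
            timeline.sort(key=lambda slot: slot[0])
    return timeline


requests = [
    {"profit": 10, "duration": 2, "window_start": 0, "window_end": 5},
    {"profit": 4, "duration": 3, "window_start": 0, "window_end": 10},
]
plan = schedule(requests, horizon=10)
# The high-priority request takes [0, 2); the second is slotted at [2, 5).
```

This separation mirrors the framework the authors highlight: the learned component only emits an ordering, while constraint management lives entirely in the constructive scheduler, which is what keeps the DRL policy free of constraint-checking logic.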
Journal introduction:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are short.