{"title":"运动空间目标检测的多帧时空注意运动自适应网络","authors":"Yuxi Guo , Junzhe Cao , Bindang Xue","doi":"10.1016/j.asr.2025.08.024","DOIUrl":null,"url":null,"abstract":"<div><div>Space target detection based on optical observations is a fundamental approach for space situational awareness. However, in ground-based optical images, space targets often resemble stars, making single-frame differentiation challenging. The dense stellar background with high dynamic range further complicates object extraction. In real-world scenarios, varying relative motion between objects and the observation platform induces diverse apparent speeds, scales, and shapes, posing challenges for existing methods to exploit multiframe motion cues effectively, thereby limiting detection performance. To address these challenges, we propose the Multiframe Spatio-temporal Attention Motion-adaptive Network (MSAMNet) for robust small moving target detection in complex backgrounds. MSAMNet integrates Adaptive Attention Feature Enhancement (AAFE) and Spatio-Temporal Dynamic Motion-Aware (STDMA) modules to enhance spatio-temporal feature representation. AAFE generates attention maps based on channel-wise differences and local feature distributions to suppress noise and highlight target details. STDMA captures motion features across consecutive frames through exponential decay weighting and multi-scale heterogeneous kernel convolution, improving sensitivity to diverse motion patterns. Furthermore, we introduce BUAA-MSOD, the first open-source dataset of multiple moving space objects in real-world scenarios, covering diverse motion patterns and target morphologies with both mask and point-level annotations. 
Experimental results demonstrate that MSAMNet significantly outperforms state-of-the-art methods on BUAA-MSOD, achieving higher detection accuracy and lower false alarm rates across various space targets.</div></div>","PeriodicalId":50850,"journal":{"name":"Advances in Space Research","volume":"76 9","pages":"Pages 5383-5405"},"PeriodicalIF":2.8000,"publicationDate":"2025-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multiframe spatio-temporal attention motion-adaptive network for moving space target detection\",\"authors\":\"Yuxi Guo , Junzhe Cao , Bindang Xue\",\"doi\":\"10.1016/j.asr.2025.08.024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Space target detection based on optical observations is a fundamental approach for space situational awareness. However, in ground-based optical images, space targets often resemble stars, making single-frame differentiation challenging. The dense stellar background with high dynamic range further complicates object extraction. In real-world scenarios, varying relative motion between objects and the observation platform induces diverse apparent speeds, scales, and shapes, posing challenges for existing methods to exploit multiframe motion cues effectively, thereby limiting detection performance. To address these challenges, we propose the Multiframe Spatio-temporal Attention Motion-adaptive Network (MSAMNet) for robust small moving target detection in complex backgrounds. MSAMNet integrates Adaptive Attention Feature Enhancement (AAFE) and Spatio-Temporal Dynamic Motion-Aware (STDMA) modules to enhance spatio-temporal feature representation. AAFE generates attention maps based on channel-wise differences and local feature distributions to suppress noise and highlight target details. 
STDMA captures motion features across consecutive frames through exponential decay weighting and multi-scale heterogeneous kernel convolution, improving sensitivity to diverse motion patterns. Furthermore, we introduce BUAA-MSOD, the first open-source dataset of multiple moving space objects in real-world scenarios, covering diverse motion patterns and target morphologies with both mask and point-level annotations. Experimental results demonstrate that MSAMNet significantly outperforms state-of-the-art methods on BUAA-MSOD, achieving higher detection accuracy and lower false alarm rates across various space targets.</div></div>\",\"PeriodicalId\":50850,\"journal\":{\"name\":\"Advances in Space Research\",\"volume\":\"76 9\",\"pages\":\"Pages 5383-5405\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2025-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Space Research\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0273117725009007\",\"RegionNum\":3,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ASTRONOMY & ASTROPHYSICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Space Research","FirstCategoryId":"89","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0273117725009007","RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ASTRONOMY & ASTROPHYSICS","Score":null,"Total":0}
Multiframe spatio-temporal attention motion-adaptive network for moving space target detection
Space target detection based on optical observations is a fundamental approach for space situational awareness. However, in ground-based optical images, space targets often resemble stars, making single-frame differentiation challenging. The dense stellar background with high dynamic range further complicates object extraction. In real-world scenarios, varying relative motion between objects and the observation platform induces diverse apparent speeds, scales, and shapes, posing challenges for existing methods to exploit multiframe motion cues effectively, thereby limiting detection performance. To address these challenges, we propose the Multiframe Spatio-temporal Attention Motion-adaptive Network (MSAMNet) for robust small moving target detection in complex backgrounds. MSAMNet integrates Adaptive Attention Feature Enhancement (AAFE) and Spatio-Temporal Dynamic Motion-Aware (STDMA) modules to enhance spatio-temporal feature representation. AAFE generates attention maps based on channel-wise differences and local feature distributions to suppress noise and highlight target details. STDMA captures motion features across consecutive frames through exponential decay weighting and multi-scale heterogeneous kernel convolution, improving sensitivity to diverse motion patterns. Furthermore, we introduce BUAA-MSOD, the first open-source dataset of multiple moving space objects in real-world scenarios, covering diverse motion patterns and target morphologies with both mask and point-level annotations. Experimental results demonstrate that MSAMNet significantly outperforms state-of-the-art methods on BUAA-MSOD, achieving higher detection accuracy and lower false alarm rates across various space targets.
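The exponential decay weighting that the abstract attributes to STDMA can be illustrated with a minimal sketch: older frames receive exponentially smaller weights before the sequence is fused. The function name, the decay rate `lam`, and the weighted-sum fusion are illustrative assumptions for exposition only, not the paper's actual implementation.

```python
import numpy as np

def exp_decay_fuse(frames, lam=0.5):
    """Fuse a stack of frame features [T, H, W] into one map [H, W].

    Hypothetical sketch: the weight of frame t is exp(-lam * (T-1-t)),
    so the most recent frame (t = T-1) gets weight 1 and older frames
    decay exponentially. Weights are normalized to sum to 1.
    """
    T = frames.shape[0]
    ages = np.arange(T - 1, -1, -1, dtype=float)  # age of each frame: T-1, ..., 0
    w = np.exp(-lam * ages)
    w /= w.sum()
    # Contract the weight vector against the temporal axis of the stack.
    return np.tensordot(w, frames, axes=1)

# Frames whose mean brightness grows over time: the fused map is pulled
# toward the recent (brighter) frames rather than the plain average.
frames = np.arange(4, dtype=float)[:, None, None] * np.ones((4, 3, 3))
fused = exp_decay_fuse(frames)
```

Because recent frames dominate, the fused value here exceeds the unweighted mean (1.5) of the four frames, which matches the stated goal of emphasizing the latest motion evidence.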
Journal Introduction:
The COSPAR publication Advances in Space Research (ASR) is an open journal covering all areas of space research including: space studies of the Earth's surface, meteorology, climate, the Earth-Moon system, planets and small bodies of the solar system, upper atmospheres, ionospheres and magnetospheres of the Earth and planets including reference atmospheres, space plasmas in the solar system, astrophysics from space, materials sciences in space, fundamental physics in space, space debris, space weather, Earth observations of space phenomena, etc.
NB: Please note that manuscripts related to life sciences as related to space are no longer accepted for submission to Advances in Space Research. Such manuscripts should now be submitted to the new COSPAR journal Life Sciences in Space Research (LSSR).
All submissions are reviewed by two scientists in the field. COSPAR is an interdisciplinary scientific organization concerned with the progress of space research on an international scale. Operating under the rules of ICSU, COSPAR ignores political considerations and considers all questions solely from the scientific viewpoint.