DGNSS-Vision Integration for Robust and Accurate Relative Spacecraft Navigation
V. Capuano, A. Harvard, Yvette Lin, Soon-Jo Chung
Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019)
Published: 2019-09-20 · DOI: 10.33012/2019.16961
Citations: 11
Abstract
Relative spacecraft navigation based on the Global Navigation Satellite System (GNSS) has already been performed successfully in low Earth orbit (LEO). Very high accuracy, on the order of millimeters, has been achieved in post-processing using carrier-phase differential GNSS (CDGNSS) and recovering the integer number of carrier wavelengths (the ambiguity) between the GNSS transmitters and the receiver. However, the performance achievable on board, in real time, above LEO and the GNSS constellation would be significantly lower due to limited computational resources, weaker signals, and worse geometric dilution of precision (GDOP). At the same time, monocular vision provides lower accuracy than CDGNSS when the spacecraft separation is significant, and its accuracy degrades further for larger baselines and wider fields of view (FOVs). To increase the robustness, continuity, and accuracy of a real-time, on-board, GNSS-based relative navigation solution in GNSS-degraded environments such as geosynchronous and high Earth orbits, we propose a novel navigation architecture based on a tight fusion of carrier-phase GNSS observations and monocular vision-based measurements. The architecture enables fast, autonomous relative pose estimation of cooperative spacecraft even under high GDOP and low GNSS visibility, where the GNSS signals are degraded, weak, or cannot be tracked continuously.
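GDOP quantifies how satellite geometry amplifies ranging error into position and clock-bias error; above the GNSS constellation, only signals spilling past the Earth's limb are visible, so the line-of-sight vectors cluster and GDOP grows. As an illustrative sketch of the standard GDOP definition (not the paper's implementation; the satellite positions in the usage example are hypothetical):

```python
import numpy as np

def gdop(sat_positions, receiver_position):
    """Geometric dilution of precision from ECEF positions (meters).

    Builds the geometry matrix G, one row per satellite: the negative
    unit line-of-sight vector plus a clock column, then returns
    sqrt(trace((G^T G)^-1)).
    """
    rows = []
    for sat in sat_positions:
        los = np.asarray(sat, dtype=float) - np.asarray(receiver_position, dtype=float)
        u = los / np.linalg.norm(los)
        rows.append(np.hstack((-u, 1.0)))   # [-ux, -uy, -uz, 1]
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)              # covariance factor of the LS solution
    return float(np.sqrt(np.trace(Q)))
```

With four or more well-spread satellites the matrix is invertible and GDOP is small; as the line-of-sight vectors cluster (the above-the-constellation case), `G.T @ G` becomes ill-conditioned and GDOP blows up.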
In this paper, we describe the architecture and implementation of the multi-sensor navigation solution and validate the proposed method in simulation. We use a dataset of images synthetically generated according to a chaser/target relative motion in geostationary Earth orbit (GEO), together with realistic carrier-phase and code-based GNSS observations simulated at the receiver positions in the same orbits. We demonstrate that our fusion solution provides higher accuracy, higher robustness, and faster ambiguity resolution under degraded GNSS signal conditions, even when using wide-FOV cameras.
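The carrier-phase observations behind CDGNSS are typically double-differenced across two receivers and a reference satellite, which cancels both receiver and satellite clock biases and leaves only geometry plus the integer ambiguities to be resolved. A minimal sketch of that differencing step, with a hypothetical data layout (not the authors' code):

```python
def double_difference(phase_a, phase_b, ref_sat):
    """Double-differenced carrier phase between two receivers.

    phase_a, phase_b: dicts {sat_id: carrier phase in cycles} observed
    by receivers A and B at a common epoch. ref_sat: reference satellite.
    Single differencing (A - B) cancels the satellite clock terms;
    differencing against ref_sat then cancels the receiver clock terms.
    """
    common = set(phase_a) & set(phase_b)
    sd = {s: phase_a[s] - phase_b[s] for s in common}  # single differences
    return {s: sd[s] - sd[ref_sat] for s in common if s != ref_sat}
```

Because any per-receiver clock offset adds the same constant to every single difference, it drops out in the second differencing step, which is why the double-differenced observable depends only on relative geometry and the integer ambiguities.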