Training Datasets Generation for Machine Learning: Application to Vision Based Navigation

Jérémy Lebreton, Ingo Ahrns, Roland Brochard, Christoph Haskamp, Matthieu Le Goff, Nicolas Menga, Nicolas Ollagnier, Ralf Regele, Francesco Capolupo, Massimo Casasco

arXiv:2409.11383 (arXiv - PHYS - Earth and Planetary Astrophysics), published 2024-09-17
Abstract
Vision Based Navigation uses cameras as precision sensors for Guidance, Navigation and Control (GNC) by extracting information from images. One obstacle to the adoption of machine learning for space applications is demonstrating that the available training datasets are adequate to validate the algorithms. The objective of this study is to generate datasets of images and metadata suitable for training machine learning algorithms. Two use cases were selected, and a robust methodology was developed to validate the datasets, including their ground truth. The first use case is in-orbit rendezvous with a man-made object: a mockup of the ENVISAT satellite. The second use case is a Lunar landing scenario. Datasets were produced from archival data (Chang'e 3), from laboratory campaigns at the DLR TRON facility and the Airbus Robotic Laboratory, from the SurRender high-fidelity image simulator using Model Capture, and from Generative Adversarial Networks. The use case definition included the selection of benchmark algorithms: an AI-based pose estimation algorithm and a dense optical flow algorithm. Ultimately, it is demonstrated that the datasets produced with SurRender and with the selected laboratory facilities are adequate for training machine learning algorithms.
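
The abstract names a dense optical flow algorithm as one of the two benchmarks but does not specify its implementation. As a minimal sketch of what such a benchmark might look like, the example below uses OpenCV's Farneback method as a stand-in; the frame contents, sizes, and simulated motion are illustrative placeholders, not the paper's actual data.

```python
# Hypothetical sketch: the paper does not disclose its dense optical flow
# implementation, so OpenCV's Farneback method is used here for illustration.
import cv2
import numpy as np

def dense_flow(prev_img: np.ndarray, next_img: np.ndarray) -> np.ndarray:
    """Estimate per-pixel motion (dx, dy) between two grayscale frames."""
    return cv2.calcOpticalFlowFarneback(
        prev_img, next_img, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
    )

# Placeholder frames standing in for consecutive descent-camera images.
prev_img = np.random.randint(0, 255, (512, 512), dtype=np.uint8)
next_img = np.roll(prev_img, shift=2, axis=1)  # simulate 2 px horizontal motion

flow = dense_flow(prev_img, next_img)  # shape (512, 512, 2)
print(flow.shape, float(flow[..., 0].mean()))  # mean dx should be near 2.0
```

A benchmark of this kind compares the recovered flow field against the ground-truth motion rendered with the simulator, which is why validated ground truth is central to the dataset methodology.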
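Since the study emphasizes validating datasets together with their ground truth, a simple integrity check is to verify that every image has a well-formed pose annotation. The sketch below assumes a hypothetical layout in which each rendered image is paired with a JSON file holding a translation vector and attitude quaternion; the file names, keys, and directory are invented for illustration and are not the paper's actual format.

```python
# Hypothetical dataset integrity check. The pairing convention (PNG + JSON
# sidecar) and the metadata keys are assumptions, not the paper's schema.
import json
from pathlib import Path

def check_sample(image_path: Path) -> bool:
    """Return True if the image has a complete, consistent pose annotation."""
    meta_path = image_path.with_suffix(".json")
    if not meta_path.exists():
        return False
    meta = json.loads(meta_path.read_text())
    t = meta["translation_m"]       # assumed key: 3-vector, metres
    q = meta["quaternion_wxyz"]     # assumed key: attitude quaternion
    norm = sum(c * c for c in q) ** 0.5
    return len(t) == 3 and abs(norm - 1.0) < 1e-6  # quaternion must be unit

dataset_dir = Path("envisat_rendezvous")  # placeholder directory name
valid = [p for p in dataset_dir.glob("*.png") if check_sample(p)]
print(f"{len(valid)} samples with consistent ground truth")
```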