Comparison of DSO and ORB-SLAM3 in Low-Light Environments With Auxiliary Lighting and Deep Learning Based Image Enhancing
Francesco Crocetti, Raffaele Brilli, Alberto Dionigi, Mario L. Fravolini, Gabriele Costante, Paolo Valigi
Journal of Field Robotics, Vol. 42, No. 7, pp. 3748-3771, published 2025-05-27. DOI: 10.1002/rob.22595 (https://onlinelibrary.wiley.com/doi/10.1002/rob.22595)
Abstract
In the evolving landscape of robotic navigation, the demand for solutions capable of operating in challenging scenarios, such as low-light environments, is increasing. This study investigates the performance of two state-of-the-art (SOTA) visual simultaneous localization and mapping (V-SLAM) algorithms, direct sparse odometry (DSO) and ORB-SLAM3, in their monocular implementations, in dark indoor scenarios where the only light source is an auxiliary lighting system installed on a robot. A modified Pioneer 3-DX robot, equipped with a monocular camera, LED bars, and a lux meter, is used to collect a novel data set, “LUCID—Lighting Up Campus Indoor Spaces Data Set,” in real-world, low-light indoor environments. The data set includes image sequences enhanced with a generative adversarial network (GAN) to simulate varying levels of image enhancement. Through comprehensive experiments, we assess the performance of both V-SLAM algorithms, considering the critical balance between maintaining adequate auxiliary illumination and applying image enhancement. This study provides insights into the optimization of robotic navigation in low-light conditions, paving the way for more robust and reliable autonomous navigation systems.
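The evaluation pipeline described in the abstract can be summarized as applying an optional learned enhancement step to each monocular frame before it reaches the V-SLAM front end, then comparing the resulting trajectories. The sketch below illustrates that flow only; `GanEnhancer` and the `slam.track()` wrapper are hypothetical placeholders (the gamma-correction body merely stands in for a GAN generator forward pass) and none of these names come from the paper.

```python
# Minimal sketch of the pipeline: each low-light frame is optionally passed through
# a learned enhancement step before being handed to a monocular V-SLAM front end.
# GanEnhancer and the `slam` wrapper are hypothetical stand-ins, not the authors' code.
import cv2
import numpy as np


class GanEnhancer:
    """Placeholder for a learned low-light enhancement model (e.g., a GAN generator)."""

    def __init__(self, strength: float = 1.0):
        # `strength` mimics the "varying levels of image enhancement" in the data set.
        self.strength = strength

    def enhance(self, frame: np.ndarray) -> np.ndarray:
        # Stand-in for a GAN forward pass: simple gamma correction scaled by `strength`.
        gamma = 1.0 + self.strength
        normalized = frame.astype(np.float32) / 255.0
        brightened = np.power(normalized, 1.0 / gamma) * 255.0
        return np.clip(brightened, 0, 255).astype(np.uint8)


def run_sequence(video_path, slam, enhancer=None):
    """Feed a (possibly enhanced) monocular sequence to a V-SLAM system and collect poses."""
    capture = cv2.VideoCapture(video_path)
    trajectory = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if enhancer is not None:
            gray = enhancer.enhance(gray)
        pose = slam.track(gray)  # hypothetical wrapper around DSO or ORB-SLAM3
        if pose is not None:
            trajectory.append(pose)
    capture.release()
    return trajectory
```

Trajectories collected at different auxiliary-light levels and enhancement strengths can then be aligned and compared against a reference, for example with a standard metric such as absolute trajectory error.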
About the Journal
The Journal of Field Robotics seeks to promote scholarly publications dealing with the fundamentals of robotics in unstructured and dynamic environments.
The Journal focuses on experimental robotics and encourages publication of work that has both theoretical and practical significance.