{"title":"Using 6 DOF vision-inertial tracking to evaluate and improve low cost depth sensor based SLAM","authors":"Thomas Calloway, D. Megherbi","doi":"10.1109/CIVEMSA.2016.7524314","DOIUrl":null,"url":null,"abstract":"Systems that use low cost depth sensors, to perform 3D reconstructions of environments while simultaneously tracking sensor pose, have received significant attention in recent years. While the majority of publications in the literature on the subject focus on the successes of various 3D scene reconstruction algorithms used, few attempt to quantify the practical limitations of the RGB-D sensors themselves. Furthermore, many publications report successful results while ignoring the many situations in which the systems will be entirely non-functional. In our prior work, using an optical-inertial motion tracker, we evaluated 3 Degree-Of-Freedom (3 DOF) sensor orientation estimation errors existing in a Simultaneous Localization and Mapping (SLAM) implementation based on the popular Microsoft Kinect. In this paper we present and extend our analysis of 3 DOF sensor orientation estimation error, using an optical-inertial motion tracker, to include the full 6 DOF sensor pose (positioning and orientation). 
We then fully integrate the motion tracker into the original depth sensor-based algorithm, demonstrating improved reliability and accuracy of scene reconstruction.","PeriodicalId":244122,"journal":{"name":"2016 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"445 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIVEMSA.2016.7524314","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Systems that use low-cost depth sensors to perform 3D reconstructions of environments while simultaneously tracking sensor pose have received significant attention in recent years. While the majority of publications on the subject focus on the successes of the various 3D scene reconstruction algorithms used, few attempt to quantify the practical limitations of the RGB-D sensors themselves. Furthermore, many publications report successful results while ignoring the many situations in which the systems will be entirely non-functional. In our prior work, using an optical-inertial motion tracker, we evaluated 3 Degree-Of-Freedom (3 DOF) sensor orientation estimation errors in a Simultaneous Localization and Mapping (SLAM) implementation based on the popular Microsoft Kinect. In this paper we present and extend our analysis of 3 DOF sensor orientation estimation error, using an optical-inertial motion tracker, to cover the full 6 DOF sensor pose (position and orientation). We then fully integrate the motion tracker into the original depth sensor-based algorithm, demonstrating improved reliability and accuracy of scene reconstruction.
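The evaluation described above compares each SLAM pose estimate against the optical-inertial tracker's pose, treated as ground truth. The per-frame 6 DOF error splits naturally into a translation part and a rotation part; a minimal sketch of that computation follows, where the function name and the pose representation (position vector plus unit quaternion) are illustrative assumptions, not details taken from the paper:

```python
import math

def pose_error(p_est, q_est, p_gt, q_gt):
    """Compare an estimated 6 DOF pose against a ground-truth pose.

    p_est, p_gt: (x, y, z) positions.
    q_est, q_gt: unit quaternions (w, x, y, z).
    Returns (translation error in position units, rotation error in radians).
    """
    # Translation error: Euclidean distance between the two positions.
    t_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_est, p_gt)))

    # Rotation error: angle of the relative rotation between the two
    # orientations, recovered from the quaternion dot product. The abs()
    # handles the q / -q double-cover ambiguity; min() guards acos's domain.
    dot = abs(sum(a * b for a, b in zip(q_est, q_gt)))
    r_err = 2.0 * math.acos(min(1.0, dot))
    return t_err, r_err

# Example: a pose offset by 3 units and rotated 90 degrees about z.
q90 = (math.sqrt(0.5), 0.0, 0.0, math.sqrt(0.5))
t, r = pose_error((1.0, 2.0, 2.0), q90, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))
```

Aggregating these per-frame errors (e.g. as an RMSE over a trajectory) gives the kind of quantitative 6 DOF accuracy figure the abstract refers to.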