A Semantic Enhancement of Unified Geometric Representations for Improving Indoor Visual SLAM
Ioannis Asmanis, P. Mermigkas, G. Chalvatzaki, Jan-Martin Peters, P. Maragos
2022 19th International Conference on Ubiquitous Robots (UR), 2022-07-04
DOI: 10.1109/ur55393.2022.9826249
Abstract
Over the last two decades, visual SLAM research has taken a turn towards using geometric structures more complex than points for describing indoor environments. At the same time, semantic information is becoming increasingly available to robotic applications, improving robots’ perceptive capabilities. In this work, we introduce a method for uniting these two approaches. Namely, we propose a novel mechanism for propagating semantics directly into the optimization level of an RGB-D SLAM framework. This framework internally uses unified geometric representations to jointly describe points, lines and planes. We also validate our approach with experiments on various datasets, both synthetic and real-world, with comparisons against representative systems from the literature.
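The abstract only outlines the idea of feeding semantic information into the optimization back-end of an RGB-D SLAM system that handles points, lines and planes with a shared representation. As a rough illustration of how such a coupling could look, the sketch below weights geometric residuals in a least-squares cost by a per-class semantic confidence. All names, the landmark parameterizations and the weighting scheme are assumptions made for illustration; they are not the paper's actual mechanism or representation.

```python
# Hypothetical sketch: semantic weighting of geometric residuals in a
# least-squares SLAM back-end. The parameterizations and weights are
# illustrative assumptions, not the formulation used in the paper.
import numpy as np

# A shared homogeneous parameterization for landmarks: a point is stored
# as [x, y, z, 1], a plane as its normal plus offset [n_x, n_y, n_z, d].
# (Lines would need e.g. Pluecker coordinates; omitted for brevity.)
def point_residual(landmark, measurement):
    """Euclidean point-to-point error."""
    return landmark[:3] - measurement[:3]

def plane_residual(landmark, measurement):
    """Difference of plane parameters (normal and offset)."""
    return landmark - measurement

# Hypothetical semantic confidences: classes assumed to be rigid and
# static (walls, floors) are trusted more than movable objects (chairs).
SEMANTIC_WEIGHT = {"wall": 1.0, "floor": 1.0, "chair": 0.3, "unknown": 0.5}

def weighted_residual(residual, semantic_label):
    """Scale a geometric residual by the semantic confidence of the
    landmark, so reliable structure dominates the optimization."""
    w = SEMANTIC_WEIGHT.get(semantic_label, SEMANTIC_WEIGHT["unknown"])
    return np.sqrt(w) * residual  # sqrt(w) so the squared cost is w * ||r||^2

if __name__ == "__main__":
    plane_est = np.array([0.0, 0.0, 1.0, 2.0])      # estimated floor plane
    plane_meas = np.array([0.0, 0.05, 0.999, 2.1])  # noisy observation
    r = plane_residual(plane_est, plane_meas)
    print("unweighted cost:", r @ r)
    print("weighted cost  :", np.sum(weighted_residual(r, "floor") ** 2))
```

In this toy version, down-weighting landmarks from unreliable semantic classes simply shrinks their contribution to the squared cost; the paper's contribution, per the abstract, is to propagate such semantics directly into the optimization of a unified point/line/plane framework.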