Naman Patel, F. Khorrami, P. Krishnamurthy, A. Tzes
{"title":"紧密耦合语义RGB-D惯性里程计精确的长期定位和映射","authors":"Naman Patel, F. Khorrami, P. Krishnamurthy, A. Tzes","doi":"10.1109/ICAR46387.2019.8981658","DOIUrl":null,"url":null,"abstract":"In this paper, we utilize semantically enhanced feature matching and visual inertial bundle adjustment to improve the robustness of odometry especially in feature-sparse environments. A novel semantically enhanced feature matching algorithm is developed for robust: 1) medium and long-term tracking, and 2) loop-closing. Additionally, a semantic visual inertial bundle adjustment algorithm is introduced to robustly estimate pose in presence of ambiguous correspondences or in feature sparse environment. Our tightly coupled semantic RGB-D odometry approach is demonstrated on a real world indoor dataset collected using our unmanned ground vehicle (UGV). Our approach improves traditional visual odometry relying on low-level geometric features like corners, points, and planes for localization and mapping. Additionally, prior approaches are limited due to their sensitivity to scene geometry and changes in light intensity. The semantic inertial odometry is especially important to significantly reduce drifts in longer intervals.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"23 1","pages":"523-528"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Tightly Coupled Semantic RGB-D Inertial Odometry for Accurate Long-Term Localization and Mapping\",\"authors\":\"Naman Patel, F. Khorrami, P. Krishnamurthy, A. Tzes\",\"doi\":\"10.1109/ICAR46387.2019.8981658\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we utilize semantically enhanced feature matching and visual inertial bundle adjustment to improve the robustness of odometry especially in feature-sparse environments. A novel semantically enhanced feature matching algorithm is developed for robust: 1) medium and long-term tracking, and 2) loop-closing. Additionally, a semantic visual inertial bundle adjustment algorithm is introduced to robustly estimate pose in presence of ambiguous correspondences or in feature sparse environment. Our tightly coupled semantic RGB-D odometry approach is demonstrated on a real world indoor dataset collected using our unmanned ground vehicle (UGV). Our approach improves traditional visual odometry relying on low-level geometric features like corners, points, and planes for localization and mapping. Additionally, prior approaches are limited due to their sensitivity to scene geometry and changes in light intensity. 
The semantic inertial odometry is especially important to significantly reduce drifts in longer intervals.\",\"PeriodicalId\":6606,\"journal\":{\"name\":\"2019 19th International Conference on Advanced Robotics (ICAR)\",\"volume\":\"23 1\",\"pages\":\"523-528\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 19th International Conference on Advanced Robotics (ICAR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAR46387.2019.8981658\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 19th International Conference on Advanced Robotics (ICAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAR46387.2019.8981658","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Tightly Coupled Semantic RGB-D Inertial Odometry for Accurate Long-Term Localization and Mapping
In this paper, we utilize semantically enhanced feature matching and visual-inertial bundle adjustment to improve the robustness of odometry, especially in feature-sparse environments. A novel semantically enhanced feature matching algorithm is developed for robust 1) medium- and long-term tracking and 2) loop closing. Additionally, a semantic visual-inertial bundle adjustment algorithm is introduced to robustly estimate pose in the presence of ambiguous correspondences or in feature-sparse environments. Our tightly coupled semantic RGB-D odometry approach is demonstrated on a real-world indoor dataset collected using our unmanned ground vehicle (UGV). Our approach improves upon traditional visual odometry, which relies on low-level geometric features such as corners, points, and planes for localization and mapping; prior approaches are further limited by their sensitivity to scene geometry and changes in light intensity. The semantic inertial odometry is especially important for significantly reducing drift over longer intervals.
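As a concrete illustration of the semantically enhanced matching idea described above, the sketch below gates ordinary descriptor matches by requiring that both endpoints of a correspondence carry the same semantic label, which prunes ambiguous matches before tracking or loop closing. This is a minimal, hypothetical sketch rather than the authors' implementation: it assumes ORB features via OpenCV and per-pixel label maps (seg_a, seg_b) produced by any off-the-shelf segmentation network.

```python
# Minimal sketch of semantic-gated feature matching (illustrative, not the
# authors' implementation). Assumes seg_a / seg_b are per-pixel semantic
# label maps aligned with the grayscale images img_a / img_b.
import cv2

def semantic_match(img_a, seg_a, img_b, seg_b, max_matches=200):
    # Detect and describe keypoints with ORB.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []

    # Standard brute-force Hamming matching with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    # Semantic gate: keep a correspondence only if both keypoints fall on
    # pixels with the same semantic class, discarding matches that are
    # geometrically plausible but semantically inconsistent.
    consistent = []
    for m in matches:
        xa, ya = map(int, kp_a[m.queryIdx].pt)
        xb, yb = map(int, kp_b[m.trainIdx].pt)
        if seg_a[ya, xa] == seg_b[yb, xb]:
            consistent.append(m)

    # Prefer the strongest surviving matches for pose estimation or
    # loop-closure verification.
    consistent.sort(key=lambda m: m.distance)
    return consistent[:max_matches]
```

In a tightly coupled pipeline of this kind, the surviving correspondences would then feed the visual-inertial bundle adjustment, for example with reprojection residuals down-weighted when their semantic labels disagree across views.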