{"title":"基于目标检测的动态场景视觉SLAM","authors":"Xinhua Zhao, Lei Ye","doi":"10.1109/ICMA54519.2022.9856202","DOIUrl":null,"url":null,"abstract":"Simultaneous localization and mapping (SLAM) is a crucial part of intelligent mobile robots. Nevertheless, most classical visual SLAM methods currently operate in static environments. As a result, in dynamic scenes, localization is unreliable. This paper proposes a robust visual SLAM for dynamic scenes called DO-SLAM. DO-SLAM consists of five parallel running threads: tracking, object detection, local mapping, loop closure, and octree map. By fusion object detection with motion information check, the dynamic feature points of the image sequence are searched, and the potential dynamic points in the image are removed using an adaptive range image moving point removal technique based on the dynamic feature points. Meanwhile, a dense 3D octree map is generated, which can be used for navigation and obstacle avoidance of intelligent mobile robots. Experimental results in the TUM RGB-D dataset show that the absolute trajectory accuracy of DO-SLAM is improved by 92.6% in high dynamic sequences compared to ORB-SLAM2, while there is little difference in accuracy compared to DS-SLAM, but the real-time performance is significantly enhanced.","PeriodicalId":120073,"journal":{"name":"2022 IEEE International Conference on Mechatronics and Automation (ICMA)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Object Detection-based Visual SLAM for Dynamic Scenes\",\"authors\":\"Xinhua Zhao, Lei Ye\",\"doi\":\"10.1109/ICMA54519.2022.9856202\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Simultaneous localization and mapping (SLAM) is a crucial part of intelligent mobile robots. Nevertheless, most classical visual SLAM methods currently operate in static environments. 
As a result, in dynamic scenes, localization is unreliable. This paper proposes a robust visual SLAM for dynamic scenes called DO-SLAM. DO-SLAM consists of five parallel running threads: tracking, object detection, local mapping, loop closure, and octree map. By fusion object detection with motion information check, the dynamic feature points of the image sequence are searched, and the potential dynamic points in the image are removed using an adaptive range image moving point removal technique based on the dynamic feature points. Meanwhile, a dense 3D octree map is generated, which can be used for navigation and obstacle avoidance of intelligent mobile robots. Experimental results in the TUM RGB-D dataset show that the absolute trajectory accuracy of DO-SLAM is improved by 92.6% in high dynamic sequences compared to ORB-SLAM2, while there is little difference in accuracy compared to DS-SLAM, but the real-time performance is significantly enhanced.\",\"PeriodicalId\":120073,\"journal\":{\"name\":\"2022 IEEE International Conference on Mechatronics and Automation (ICMA)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Mechatronics and Automation (ICMA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMA54519.2022.9856202\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Mechatronics and Automation 
(ICMA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMA54519.2022.9856202","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Object Detection-based Visual SLAM for Dynamic Scenes
Simultaneous localization and mapping (SLAM) is a crucial capability for intelligent mobile robots. Nevertheless, most classical visual SLAM methods assume a static environment, so localization becomes unreliable in dynamic scenes. This paper proposes a robust visual SLAM system for dynamic scenes called DO-SLAM. DO-SLAM consists of five parallel threads: tracking, object detection, local mapping, loop closing, and octree mapping. By fusing object detection with a motion-consistency check, the dynamic feature points in the image sequence are identified, and potential dynamic points are then removed with an adaptive-range moving-point removal technique seeded by those dynamic feature points. Meanwhile, a dense 3D octree map is generated, which can be used for navigation and obstacle avoidance by intelligent mobile robots. Experimental results on the TUM RGB-D dataset show that, on highly dynamic sequences, DO-SLAM improves absolute trajectory accuracy by 92.6% over ORB-SLAM2; its accuracy is comparable to that of DS-SLAM, but its real-time performance is significantly better.
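The core idea of combining object detection with feature filtering can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: it assumes detections arrive as labeled bounding boxes (e.g. from a YOLO-style detector) and simply discards keypoints that fall inside boxes of classes assumed to be dynamic; the function and variable names are invented for illustration, and DO-SLAM additionally applies a motion-consistency check before removal.

```python
# Hypothetical sketch of detection-based dynamic-point filtering.
# Keypoints inside a bounding box of an a-priori dynamic class
# (e.g. "person") are dropped before pose estimation.

DYNAMIC_CLASSES = {"person", "car", "dog"}  # assumed prior-dynamic labels


def filter_dynamic_points(keypoints, detections):
    """Keep only keypoints lying outside every dynamic-object box.

    keypoints  -- list of (x, y) pixel coordinates
    detections -- list of (label, x_min, y_min, x_max, y_max) tuples
    """
    dynamic_boxes = [
        (x0, y0, x1, y1)
        for label, x0, y0, x1, y1 in detections
        if label in DYNAMIC_CLASSES
    ]

    def in_any_box(pt):
        x, y = pt
        return any(
            x0 <= x <= x1 and y0 <= y <= y1
            for x0, y0, x1, y1 in dynamic_boxes
        )

    return [pt for pt in keypoints if not in_any_box(pt)]


kps = [(10, 10), (50, 50), (200, 120)]
dets = [("person", 40, 40, 80, 80), ("chair", 150, 100, 250, 200)]
print(filter_dynamic_points(kps, dets))  # (50, 50) falls inside the person box
```

In a full pipeline, a consistency test (e.g. an epipolar-geometry check between consecutive frames) would distinguish truly moving points from static points that merely fall inside a detection box, which is the role the abstract's "motion information check" plays.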