{"title":"Multiple Dynamic Object Tracking for Visual SLAM","authors":"Fuxin Liu Hubei, Yanduo Zhang, Xun Li","doi":"10.1109/ICRCV55858.2022.9953172","DOIUrl":null,"url":null,"abstract":"The assumption based on scene rigidity has been accepted widely in visual SLAM framework. However, the supposition limits the development of SLAM algorithm in the real world. Especially for automatic driving, many complicate cases involved in it, which demands our SLAM system that provides accurate position robustly and perceives the surrounding environment reliably. Therefore, in this paper, we propose a novel visual SLAM front-end module, which uses instance segmentation and dense optical flow estimation to ensure the efficient separation of static background and dynamic targets. For potential moving objects, we take advantage of Unscented Kalman Filter (UKF) to track moving targets and update the according moving state. In light of scale inconsistency in the camera pose estimation, we recover the scene structure and obtain the scale factor in the key frame by the depth estimation network. At the end, we integrate the estimated camera pose and dynamic object tracking into a unified visual odometry. In the process of trajectory optimization, we adopt the sliding window mechanism to acquire the spatio-temporal information of the dynamic object. The experiment results show that the tracking of dynamic objects not only can provide rich clues for surroundings understanding, but also help the tracking of camera pose, and then improve the robustness of the SLAM system in dynamic environment.","PeriodicalId":399667,"journal":{"name":"2022 4th International Conference on Robotics and Computer Vision (ICRCV)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 4th International Conference on Robotics and Computer Vision (ICRCV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRCV55858.2022.9953172","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The assumption of scene rigidity is widely accepted in visual SLAM frameworks. However, this assumption limits the applicability of SLAM algorithms in the real world. Autonomous driving in particular involves many complex situations and demands a SLAM system that provides accurate positioning robustly and perceives the surrounding environment reliably. Therefore, in this paper we propose a novel visual SLAM front-end module that uses instance segmentation and dense optical flow estimation to separate the static background from dynamic targets efficiently. For potentially moving objects, we employ an Unscented Kalman Filter (UKF) to track the moving targets and update their motion states. To address scale inconsistency in camera pose estimation, we recover the scene structure and obtain the scale factor at keyframes with a depth estimation network. Finally, we integrate the estimated camera pose and dynamic object tracking into a unified visual odometry. During trajectory optimization, we adopt a sliding-window mechanism to acquire the spatio-temporal information of the dynamic objects. Experimental results show that tracking dynamic objects not only provides rich clues for understanding the surroundings, but also aids camera pose tracking, thereby improving the robustness of the SLAM system in dynamic environments.
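The paper itself does not include code, but the UKF tracking step described in the abstract can be illustrated with a minimal sketch. The snippet below assumes a constant-velocity motion model for an object centroid produced by the front end (instance segmentation plus dense optical flow); the state layout, noise values, and the use of the filterpy library are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of UKF-based tracking of one dynamic object's centroid.
# Assumptions (not from the paper): constant-velocity motion model,
# state [x, y, vx, vy], noisy 2-D centroid measurements per frame,
# and the filterpy library for the unscented transform.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # frame interval in seconds (assumed)

def fx(x, dt):
    """Constant-velocity process model: propagate position by velocity."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    return F @ x

def hx(x):
    """Measurement model: only the object's (x, y) position is observed."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=1.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.0, 0.0, 0.0])       # initial state (assumed)
ukf.P *= 10.0                                # initial uncertainty
ukf.R = np.diag([0.5, 0.5])                  # measurement noise (assumed)
ukf.Q = np.diag([0.01, 0.01, 0.1, 0.1])      # process noise (assumed)

# Feed per-frame centroid detections of the tracked object.
for z in [np.array([1.0, 0.9]), np.array([2.1, 2.0]), np.array([2.9, 3.1])]:
    ukf.predict()
    ukf.update(z)
    print("estimated position:", ukf.x[:2], "velocity:", ukf.x[2:])
```

In a full pipeline, one such filter would be maintained per tracked object, with the predicted state used to associate detections across frames and the updated motion state fed into the sliding-window optimization described in the abstract.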