Direct 2.5D LiDAR SLAM in Outdoor Dynamic Environment for Autonomous Driving*

Xuebo Tian, Jun Li, Junqiao Zhao, Chen Ye
2021 5th CAA International Conference on Vehicular Control and Intelligence (CVCI), 2021-10-29
DOI: 10.1109/CVCI54083.2021.9661117
Robust localization and mapping in outdoor road scenes is challenging for autonomous driving because moving objects can have a huge impact on the accuracy and robustness of existing simultaneous localization and mapping (SLAM) methods. In this paper, a direct 2.5D LiDAR SLAM method for dynamic scenes is proposed. This method can recognize and track dynamic objects and gradually integrate static potential dynamic objects into the SLAM optimization. The method first maps the 3D scan of the surrounding environment to a 2.5D height map. Object detection is then conducted to remove all points that belong to potential dynamic objects (PDOs). High-performance LiDAR odometry and loop detection are then implemented using direct height map matching and 2.5D descriptor-based matching, respectively. At the same time, through data association and tracking, dynamic and static PDOs are gradually separated. Points that belong to static PDOs are then progressively integrated into the SLAM system. By using as much static scene information as possible, the robustness and accuracy of SLAM are significantly improved. In addition, the resulting ego-poses are further used to accurately track PDOs, thereby improving their trajectory and speed estimation. Experiments on a public dataset and our campus dataset show that our method achieves better accuracy than SuMa++.
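The first step described in the abstract, projecting a 3D LiDAR scan onto a 2.5D height map, can be sketched as below. This is an illustrative sketch only, not the paper's implementation: the cell resolution, map extent, and the choice of storing the maximum height per cell are assumptions, since the abstract does not specify them.

```python
import numpy as np

def scan_to_height_map(points, resolution=0.2, grid_size=200):
    """Project an ego-centered 3D scan (N x 3 array of x, y, z) onto a
    2.5D height map.

    Each grid cell keeps the maximum z of the points falling into it
    (one plausible reduction; the paper does not state which is used).
    Cells with no LiDAR return hold NaN.
    """
    half = grid_size * resolution / 2.0
    hmap = np.full((grid_size, grid_size), np.nan)

    # Discretize x/y into cell indices and keep points inside the map extent.
    ix = np.floor((points[:, 0] + half) / resolution).astype(int)
    iy = np.floor((points[:, 1] + half) / resolution).astype(int)
    ok = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)

    # Keep the highest point per cell.
    for x, y, z in zip(ix[ok], iy[ok], points[ok][:, 2]):
        if np.isnan(hmap[x, y]) or z > hmap[x, y]:
            hmap[x, y] = z
    return hmap
```

In the full pipeline, cells covered by detected PDOs would be masked out before the height map is used for direct matching, so that only (assumed) static structure drives the odometry.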