{"title":"LiDAR-Visual Fusion SLAM for Autonomous Vehicle Location","authors":"Qinglu Ma;Qiuwei Jian;Meiqiang Li;Saleem Ullah","doi":"10.1109/JIOT.2025.3557148","DOIUrl":null,"url":null,"abstract":"The simultaneous localization and mapping (SLAM) is indispensable to Autonomous Vehicle (AV). However, the visual images are susceptible to light interference, and light detection and ranging (LiDAR) depends heavily on geometric features of the surrounding scene, relying solely on a camera or LiDAR exhibits limitations in challenging environments. To solve these problems, we propose an LiDAR-visual fusion method for high precision and robust vehicle localization. Compared with the previous LiDAR-visual fusion method, the proposed method fully utilizes the sensor’s measurement data for fusion in each part. First, an LiDAR vision frame is constructed at the front end, then the LiDAR is used to assist the vision in obtaining the depth information and tracking. In the closed-loop recognition part, a logic judgment module is introduced, and the LiDAR point cloud assists in the vision for loop closure correction to reduce the positioning error. Additionally, a visual-assisted LiDAR method for 3-D scene reconstruction is proposed. Experiments in real scenes show that the average positioning errors are 2.065, 1.9, and 2.9 cm in x, y, and z-directions, respectively; and the average rotation errors are 0.11 rad, 0.11 rad, and 0.13 rad in roll, pitch, yaw. The average positioning time is 29.98 ms. Compared with the classical ORB-SLAM2, LeGO-LOAM, DEMO, and TVL-SLAM algorithms, the proposed method demonstrates superior accuracy, robustness, and real-time performance.","PeriodicalId":54347,"journal":{"name":"IEEE Internet of Things Journal","volume":"12 13","pages":"25197-25210"},"PeriodicalIF":8.9000,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Internet of Things Journal","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10949615/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Simultaneous localization and mapping (SLAM) is indispensable to autonomous vehicles (AVs). However, visual images are susceptible to light interference, and light detection and ranging (LiDAR) depends heavily on the geometric features of the surrounding scene, so relying solely on a camera or on LiDAR is limiting in challenging environments. To address these problems, we propose a LiDAR-visual fusion method for high-precision, robust vehicle localization. Unlike previous LiDAR-visual fusion methods, the proposed method fully utilizes each sensor's measurement data at every stage of the fusion. First, a LiDAR-vision frame is constructed at the front end, and LiDAR is then used to assist the vision pipeline in obtaining depth information and in tracking. In the loop-closure recognition stage, a logic judgment module is introduced, and the LiDAR point cloud assists the vision in loop-closure correction to reduce positioning error. Additionally, a vision-assisted LiDAR method for 3-D scene reconstruction is proposed. Experiments in real scenes show that the average positioning errors are 2.065, 1.9, and 2.9 cm in the x-, y-, and z-directions, respectively, and the average rotation errors are 0.11, 0.11, and 0.13 rad in roll, pitch, and yaw. The average positioning time is 29.98 ms. Compared with the classical ORB-SLAM2, LeGO-LOAM, DEMO, and TVL-SLAM algorithms, the proposed method demonstrates superior accuracy, robustness, and real-time performance.
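As a concrete illustration of the LiDAR-assisted depth step the abstract describes, the sketch below projects a LiDAR point cloud into the camera image and assigns each visual feature the depth of the nearest projected LiDAR point. This is a minimal sketch under assumed conventions (pinhole camera, known 4x4 LiDAR-to-camera extrinsics), not the authors' implementation; the function names, the nearest-neighbor association rule, and the 3-pixel radius are illustrative assumptions.

import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, image_size):
    # Transform LiDAR points (N x 3) into the camera frame using the
    # 4x4 extrinsic matrix T_cam_lidar, then project with intrinsics K.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]   # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective division
    h, w = image_size
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside], pts_cam[inside, 2]     # pixel coordinates and depths

def depth_for_features(feature_uv, lidar_uv, lidar_depth, radius=3.0):
    # Assign each visual feature the depth of the nearest projected LiDAR
    # point within `radius` pixels; NaN means no LiDAR support (such a
    # feature would fall back to triangulated depth in a full pipeline).
    depths = np.full(feature_uv.shape[0], np.nan)
    if lidar_uv.shape[0] == 0:
        return depths
    for i, f in enumerate(feature_uv):
        d2 = np.sum((lidar_uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= radius ** 2:
            depths[i] = lidar_depth[j]
    return depths

In a practical front end the linear scan over projected points would be replaced by a k-d tree lookup, and features without a nearby LiDAR return would keep their triangulated depth, consistent with the tracking step outlined in the abstract.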
About the Journal
The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impact on sensor technologies, big data management, and future Internet design for applications such as smart cities and smart homes. Fields of interest include IoT architectures, such as things-centric, data-centric, and service-oriented IoT architectures; IoT enabling technologies and systematic integration, such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds, such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs), such as IEEE, IETF, ITU, 3GPP, and ETSI.