{"title":"Multi-modal Sensor Fusion Method Based on Kalman Filter","authors":"Jiankai Qin, Zongren Liu","doi":"10.1109/TOCS53301.2021.9688970","DOIUrl":null,"url":null,"abstract":"This paper proposes a multi-modal sensor fusion framework, which provides a method that meets both the accuracy and real-time requirements to fuse multiple sensors, such as lidar, IMU sensors and wheel odometry, and can be used without visual features. Or achieve robust state estimation in scenarios where the spatial structure is degraded. Different from the Classical Algorithm, this article introduces the system that uses the IMU odometry as the main processing thread, and combines the advantages of loose coupling and tight coupling methods to restore motion by error correction. The framework consists of four parts: IMU odometry, wheel odometry, lidar inertial odometry and data preprocessing module. The data preprocessing module accepts IMU raw data, wheel odometry data and lidar positioning data, and provides synchronized data for IMU odometry. The IMU odometry uses the angular velocity integral provided by the IMU to obtain the direction information, combined with the speed information provided by the wheel odometry, to obtain high-frequency positioning information, and receives movement observations from the lidar inertial odometry for pose correction. The high-frequency output data of the IMU odometry provides input to the tightly coupled part of the lidar inertial odometry, further improving the accuracy of the lidar inertial odometry. The system was tested on a small unmanned vehicle, and the test results were quite satisfactory.","PeriodicalId":360004,"journal":{"name":"2021 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TOCS53301.2021.9688970","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper proposes a multi-modal sensor fusion framework that meets both accuracy and real-time requirements when fusing multiple sensors such as lidar, IMU, and wheel odometry, and that achieves robust state estimation in scenarios without visual features or where the spatial structure is degraded. Unlike classical algorithms, this article introduces a system that uses the IMU odometry as the main processing thread and combines the advantages of loosely coupled and tightly coupled methods to recover motion through error correction. The framework consists of four parts: IMU odometry, wheel odometry, lidar inertial odometry, and a data preprocessing module. The data preprocessing module accepts raw IMU data, wheel odometry data, and lidar positioning data, and provides synchronized data to the IMU odometry. The IMU odometry integrates the angular velocity measured by the IMU to obtain orientation, combines it with the speed reported by the wheel odometry to produce high-frequency positioning, and receives motion observations from the lidar inertial odometry for pose correction. The high-frequency output of the IMU odometry in turn feeds the tightly coupled part of the lidar inertial odometry, further improving its accuracy. The system was tested on a small unmanned vehicle, and the results were satisfactory.
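The fusion loop the abstract describes can be illustrated with a minimal planar Kalman-filter sketch: integrate the IMU angular velocity for heading, combine it with wheel-odometry speed for a high-frequency pose prediction, and correct the pose with lower-frequency observations from the lidar inertial odometry. This is not the authors' implementation; the state layout, the noise matrices, and the class name PlanarFusionEKF are illustrative assumptions.

```python
import numpy as np

class PlanarFusionEKF:
    """Planar (x, y, yaw) state: predict with IMU + wheel odometry, correct with lidar."""

    def __init__(self):
        self.x = np.zeros(3)                   # state: [x, y, yaw]
        self.P = np.eye(3) * 0.1               # state covariance
        self.Q = np.diag([0.01, 0.01, 0.005])  # process noise (assumed values)
        self.R = np.diag([0.05, 0.05, 0.02])   # lidar measurement noise (assumed values)

    def predict(self, v, omega, dt):
        """High-frequency step: wheel speed v [m/s], IMU yaw rate omega [rad/s]."""
        yaw = self.x[2]
        self.x[0] += v * np.cos(yaw) * dt      # dead-reckoned position from wheel speed
        self.x[1] += v * np.sin(yaw) * dt
        self.x[2] += omega * dt                # angular-velocity integral gives heading
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def correct(self, z):
        """Low-frequency step: z = [x, y, yaw] pose from the lidar inertial odometry."""
        H = np.eye(3)                          # lidar observes the full planar pose
        y = z - H @ self.x                     # innovation
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi  # wrap the yaw residual to [-pi, pi)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```

In this reading, predict() runs at the IMU rate (e.g., 100 Hz or more) to produce the high-frequency pose the abstract mentions, while correct() runs whenever the lidar inertial odometry publishes a pose, pulling the dead-reckoned estimate back toward the lidar observation.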